00:00:00.001 Started by upstream project "autotest-per-patch" build number 132580 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.007 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy 00:00:00.769 The recommended git tool is: git 00:00:00.769 using credential 00000000-0000-0000-0000-000000000002 00:00:00.771 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/raid-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.781 Fetching changes from the remote Git repository 00:00:00.785 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.796 Using shallow fetch with depth 1 00:00:00.796 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.796 > git --version # timeout=10 00:00:00.806 > git --version # 'git version 2.39.2' 00:00:00.806 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.817 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.817 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:05.064 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:05.075 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:05.087 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:05.087 > git config core.sparsecheckout # timeout=10 00:00:05.100 > git read-tree -mu HEAD # timeout=10 00:00:05.118 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:05.141 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:05.142 > git rev-list 
--no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:05.229 [Pipeline] Start of Pipeline 00:00:05.242 [Pipeline] library 00:00:05.244 Loading library shm_lib@master 00:00:05.244 Library shm_lib@master is cached. Copying from home. 00:00:05.261 [Pipeline] node 00:00:05.272 Running on VM-host-SM17 in /var/jenkins/workspace/raid-vg-autotest 00:00:05.273 [Pipeline] { 00:00:05.284 [Pipeline] catchError 00:00:05.286 [Pipeline] { 00:00:05.296 [Pipeline] wrap 00:00:05.303 [Pipeline] { 00:00:05.312 [Pipeline] stage 00:00:05.314 [Pipeline] { (Prologue) 00:00:05.329 [Pipeline] echo 00:00:05.330 Node: VM-host-SM17 00:00:05.335 [Pipeline] cleanWs 00:00:05.342 [WS-CLEANUP] Deleting project workspace... 00:00:05.342 [WS-CLEANUP] Deferred wipeout is used... 00:00:05.349 [WS-CLEANUP] done 00:00:05.600 [Pipeline] setCustomBuildProperty 00:00:05.684 [Pipeline] httpRequest 00:00:05.991 [Pipeline] echo 00:00:05.994 Sorcerer 10.211.164.20 is alive 00:00:06.002 [Pipeline] retry 00:00:06.005 [Pipeline] { 00:00:06.040 [Pipeline] httpRequest 00:00:06.048 HttpMethod: GET 00:00:06.049 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.049 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.054 Response Code: HTTP/1.1 200 OK 00:00:06.056 Success: Status code 200 is in the accepted range: 200,404 00:00:06.056 Saving response body to /var/jenkins/workspace/raid-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.574 [Pipeline] } 00:00:06.590 [Pipeline] // retry 00:00:06.596 [Pipeline] sh 00:00:06.931 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:06.942 [Pipeline] httpRequest 00:00:07.353 [Pipeline] echo 00:00:07.354 Sorcerer 10.211.164.20 is alive 00:00:07.362 [Pipeline] retry 00:00:07.364 [Pipeline] { 00:00:07.373 [Pipeline] httpRequest 00:00:07.376 HttpMethod: GET 00:00:07.377 URL: 
http://10.211.164.20/packages/spdk_38b931b23c2d90c594fc9afa0bd539202c185f2a.tar.gz 00:00:07.377 Sending request to url: http://10.211.164.20/packages/spdk_38b931b23c2d90c594fc9afa0bd539202c185f2a.tar.gz 00:00:07.378 Response Code: HTTP/1.1 404 Not Found 00:00:07.379 Success: Status code 404 is in the accepted range: 200,404 00:00:07.379 Saving response body to /var/jenkins/workspace/raid-vg-autotest/spdk_38b931b23c2d90c594fc9afa0bd539202c185f2a.tar.gz 00:00:07.381 [Pipeline] } 00:00:07.392 [Pipeline] // retry 00:00:07.397 [Pipeline] sh 00:00:07.675 + rm -f spdk_38b931b23c2d90c594fc9afa0bd539202c185f2a.tar.gz 00:00:07.686 [Pipeline] retry 00:00:07.687 [Pipeline] { 00:00:07.701 [Pipeline] checkout 00:00:07.707 The recommended git tool is: NONE 00:00:07.715 using credential 00000000-0000-0000-0000-000000000002 00:00:07.717 Wiping out workspace first. 00:00:07.724 Cloning the remote Git repository 00:00:07.726 Honoring refspec on initial clone 00:00:07.728 Cloning repository https://review.spdk.io/gerrit/a/spdk/spdk 00:00:07.728 > git init /var/jenkins/workspace/raid-vg-autotest/spdk # timeout=10 00:00:07.735 Using reference repository: /var/ci_repos/spdk_multi 00:00:07.735 Fetching upstream changes from https://review.spdk.io/gerrit/a/spdk/spdk 00:00:07.735 > git --version # timeout=10 00:00:07.739 > git --version # 'git version 2.25.1' 00:00:07.739 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:07.742 Setting http proxy: proxy-dmz.intel.com:911 00:00:07.742 > git fetch --tags --force --progress -- https://review.spdk.io/gerrit/a/spdk/spdk refs/changes/29/23629/8 +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:56.598 Avoid second fetch 00:00:56.613 Checking out Revision 38b931b23c2d90c594fc9afa0bd539202c185f2a (FETCH_HEAD) 00:00:56.903 Commit message: "nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write" 00:00:56.911 First time build. Skipping changelog. 
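The steps above show the job's package-cache protocol: a GET for the prebuilt spdk_&lt;sha&gt;.tar.gz is attempted with 404 listed as an accepted status, the partial response body is removed on a miss, and the pipeline falls back to a full git checkout. A minimal sketch of that pattern, assuming an illustrative `fetch_package` helper and `file://` URLs rather than the real Sorcerer HTTP cache:

```shell
#!/usr/bin/env bash
# Hedged sketch of the tarball-cache fallback seen in the log above.
# fetch_package and the URLs are illustrative, not the CI's real names.
set -euo pipefail

fetch_package() {
  local url="$1" out="$2"
  # curl -f turns HTTP errors (404 included) into a nonzero exit code,
  # which we treat as a cache miss rather than a build failure.
  if curl -sf "$url" -o "$out"; then
    echo "hit"
  else
    rm -f "$out"   # drop any partial body, as the 'rm -f spdk_*.tar.gz' step does
    echo "miss"
  fi
}
```

On a miss the job clones, repacks, and later PUTs the tarball back to the cache (visible further down in the log), so the next build for the same SHA can hit.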
00:00:56.578 > git config remote.origin.url https://review.spdk.io/gerrit/a/spdk/spdk # timeout=10 00:00:56.583 > git config --add remote.origin.fetch refs/changes/29/23629/8 # timeout=10 00:00:56.587 > git config --add remote.origin.fetch +refs/heads/master:refs/remotes/origin/master # timeout=10 00:00:56.600 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:56.607 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:56.615 > git config core.sparsecheckout # timeout=10 00:00:56.619 > git checkout -f 38b931b23c2d90c594fc9afa0bd539202c185f2a # timeout=10 00:00:56.905 > git rev-list --no-walk 5977028896021975fabe08ce8485b4d939e7798e # timeout=10 00:00:56.915 > git remote # timeout=10 00:00:56.920 > git submodule init # timeout=10 00:00:56.975 > git submodule sync # timeout=10 00:00:57.028 > git config --get remote.origin.url # timeout=10 00:00:57.036 > git submodule init # timeout=10 00:00:57.080 > git config -f .gitmodules --get-regexp ^submodule\.(.+)\.url # timeout=10 00:00:57.084 > git config --get submodule.dpdk.url # timeout=10 00:00:57.087 > git remote # timeout=10 00:00:57.092 > git config --get remote.origin.url # timeout=10 00:00:57.096 > git config -f .gitmodules --get submodule.dpdk.path # timeout=10 00:00:57.099 > git config --get submodule.intel-ipsec-mb.url # timeout=10 00:00:57.101 > git remote # timeout=10 00:00:57.105 > git config --get remote.origin.url # timeout=10 00:00:57.108 > git config -f .gitmodules --get submodule.intel-ipsec-mb.path # timeout=10 00:00:57.110 > git config --get submodule.isa-l.url # timeout=10 00:00:57.113 > git remote # timeout=10 00:00:57.119 > git config --get remote.origin.url # timeout=10 00:00:57.123 > git config -f .gitmodules --get submodule.isa-l.path # timeout=10 00:00:57.126 > git config --get submodule.ocf.url # timeout=10 00:00:57.129 > git remote # timeout=10 00:00:57.132 > git config --get remote.origin.url # timeout=10 00:00:57.135 > git config -f .gitmodules --get submodule.ocf.path # 
timeout=10 00:00:57.138 > git config --get submodule.libvfio-user.url # timeout=10 00:00:57.142 > git remote # timeout=10 00:00:57.146 > git config --get remote.origin.url # timeout=10 00:00:57.149 > git config -f .gitmodules --get submodule.libvfio-user.path # timeout=10 00:00:57.153 > git config --get submodule.xnvme.url # timeout=10 00:00:57.156 > git remote # timeout=10 00:00:57.161 > git config --get remote.origin.url # timeout=10 00:00:57.164 > git config -f .gitmodules --get submodule.xnvme.path # timeout=10 00:00:57.167 > git config --get submodule.isa-l-crypto.url # timeout=10 00:00:57.171 > git remote # timeout=10 00:00:57.175 > git config --get remote.origin.url # timeout=10 00:00:57.179 > git config -f .gitmodules --get submodule.isa-l-crypto.path # timeout=10 00:00:57.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.184 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:57.187 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.187 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi dpdk # timeout=10 00:00:57.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.188 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi ocf # timeout=10 00:00:57.188 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l-crypto # timeout=10 00:00:57.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.188 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi libvfio-user # timeout=10 
00:00:57.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.188 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi xnvme # timeout=10 00:00:57.188 Setting http proxy: proxy-dmz.intel.com:911 00:00:57.188 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi isa-l # timeout=10 00:00:57.188 > git submodule update --init --recursive --reference /var/ci_repos/spdk_multi intel-ipsec-mb # timeout=10 00:01:27.528 [Pipeline] dir 00:01:27.528 Running in /var/jenkins/workspace/raid-vg-autotest/spdk 00:01:27.530 [Pipeline] { 00:01:27.544 [Pipeline] sh 00:01:27.839 ++ nproc 00:01:27.839 + threads=88 00:01:27.839 + git repack -a -d --threads=88 00:01:32.052 + git submodule foreach git repack -a -d --threads=88 00:01:32.052 Entering 'dpdk' 00:01:35.338 Entering 'intel-ipsec-mb' 00:01:35.907 Entering 'isa-l' 00:01:36.166 Entering 'isa-l-crypto' 00:01:36.166 Entering 'libvfio-user' 00:01:36.425 Entering 'ocf' 00:01:36.994 Entering 'xnvme' 00:01:37.253 + find .git -type f -name alternates -print -delete 00:01:37.253 .git/objects/info/alternates 00:01:37.253 .git/modules/libvfio-user/objects/info/alternates 00:01:37.253 .git/modules/intel-ipsec-mb/objects/info/alternates 00:01:37.253 .git/modules/isa-l/objects/info/alternates 00:01:37.253 .git/modules/isa-l-crypto/objects/info/alternates 00:01:37.253 .git/modules/dpdk/objects/info/alternates 00:01:37.253 .git/modules/ocf/objects/info/alternates 00:01:37.253 .git/modules/xnvme/objects/info/alternates 00:01:37.263 [Pipeline] } 00:01:37.281 [Pipeline] // dir 00:01:37.286 [Pipeline] } 00:01:37.303 [Pipeline] // retry 00:01:37.312 [Pipeline] sh 00:01:37.593 + hash pigz 00:01:37.593 + tar -czf spdk_38b931b23c2d90c594fc9afa0bd539202c185f2a.tar.gz spdk 00:01:49.819 [Pipeline] retry 00:01:49.821 [Pipeline] { 00:01:49.835 [Pipeline] httpRequest 00:01:49.843 HttpMethod: PUT 00:01:49.844 URL: 
http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_38b931b23c2d90c594fc9afa0bd539202c185f2a.tar.gz 00:01:49.845 Sending request to url: http://10.211.164.20/cgi-bin/sorcerer.py?group=packages&filename=spdk_38b931b23c2d90c594fc9afa0bd539202c185f2a.tar.gz 00:02:00.676 Response Code: HTTP/1.1 200 OK 00:02:00.684 Success: Status code 200 is in the accepted range: 200 00:02:00.688 [Pipeline] } 00:02:00.707 [Pipeline] // retry 00:02:00.715 [Pipeline] echo 00:02:00.718 00:02:00.718 Locking 00:02:00.718 Waited 0s for lock 00:02:00.718 Everything Fine. Saved: /storage/packages/spdk_38b931b23c2d90c594fc9afa0bd539202c185f2a.tar.gz 00:02:00.718 00:02:00.724 [Pipeline] sh 00:02:01.062 + git -C spdk log --oneline -n5 00:02:01.062 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:02:01.062 2f2acf4eb doc: move nvmf_tracing.md to tracing.md 00:02:01.062 5592070b3 doc: update nvmf_tracing.md 00:02:01.062 5ca6db5da nvme_spec: Add SPDK_NVME_IO_FLAGS_PRCHK_MASK 00:02:01.062 f7ce15267 bdev: Insert or overwrite metadata using bounce/accel buffer if NVMe PRACT is set 00:02:01.081 [Pipeline] writeFile 00:02:01.097 [Pipeline] sh 00:02:01.380 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh 00:02:01.393 [Pipeline] sh 00:02:01.676 + cat autorun-spdk.conf 00:02:01.676 SPDK_RUN_FUNCTIONAL_TEST=1 00:02:01.676 SPDK_RUN_ASAN=1 00:02:01.676 SPDK_RUN_UBSAN=1 00:02:01.676 SPDK_TEST_RAID=1 00:02:01.676 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:01.684 RUN_NIGHTLY=0 00:02:01.686 [Pipeline] } 00:02:01.703 [Pipeline] // stage 00:02:01.720 [Pipeline] stage 00:02:01.722 [Pipeline] { (Run VM) 00:02:01.736 [Pipeline] sh 00:02:02.019 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh 00:02:02.019 + echo 'Start stage prepare_nvme.sh' 00:02:02.019 Start stage prepare_nvme.sh 00:02:02.019 + [[ -n 4 ]] 00:02:02.019 + disk_prefix=ex4 00:02:02.019 + [[ -n /var/jenkins/workspace/raid-vg-autotest ]] 00:02:02.019 + [[ -e 
/var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf ]] 00:02:02.019 + source /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf 00:02:02.019 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:02.019 ++ SPDK_RUN_ASAN=1 00:02:02.019 ++ SPDK_RUN_UBSAN=1 00:02:02.019 ++ SPDK_TEST_RAID=1 00:02:02.019 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:02.019 ++ RUN_NIGHTLY=0 00:02:02.019 + cd /var/jenkins/workspace/raid-vg-autotest 00:02:02.019 + nvme_files=() 00:02:02.019 + declare -A nvme_files 00:02:02.019 + backend_dir=/var/lib/libvirt/images/backends 00:02:02.019 + nvme_files['nvme.img']=5G 00:02:02.019 + nvme_files['nvme-cmb.img']=5G 00:02:02.019 + nvme_files['nvme-multi0.img']=4G 00:02:02.019 + nvme_files['nvme-multi1.img']=4G 00:02:02.019 + nvme_files['nvme-multi2.img']=4G 00:02:02.019 + nvme_files['nvme-openstack.img']=8G 00:02:02.019 + nvme_files['nvme-zns.img']=5G 00:02:02.019 + (( SPDK_TEST_NVME_PMR == 1 )) 00:02:02.019 + (( SPDK_TEST_FTL == 1 )) 00:02:02.019 + (( SPDK_TEST_NVME_FDP == 1 )) 00:02:02.019 + [[ ! 
-d /var/lib/libvirt/images/backends ]] 00:02:02.019 + for nvme in "${!nvme_files[@]}" 00:02:02.019 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi2.img -s 4G 00:02:02.019 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc 00:02:02.019 + for nvme in "${!nvme_files[@]}" 00:02:02.019 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-cmb.img -s 5G 00:02:02.019 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc 00:02:02.019 + for nvme in "${!nvme_files[@]}" 00:02:02.019 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-openstack.img -s 8G 00:02:02.019 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc 00:02:02.019 + for nvme in "${!nvme_files[@]}" 00:02:02.019 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-zns.img -s 5G 00:02:02.019 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc 00:02:02.019 + for nvme in "${!nvme_files[@]}" 00:02:02.019 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi1.img -s 4G 00:02:02.019 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc 00:02:02.019 + for nvme in "${!nvme_files[@]}" 00:02:02.019 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme-multi0.img -s 4G 00:02:02.019 Formatting '/var/lib/libvirt/images/backends/ex4-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc 00:02:02.019 + for nvme in "${!nvme_files[@]}" 00:02:02.019 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex4-nvme.img -s 5G 00:02:02.277 
Formatting '/var/lib/libvirt/images/backends/ex4-nvme.img', fmt=raw size=5368709120 preallocation=falloc 00:02:02.277 ++ sudo grep -rl ex4-nvme.img /etc/libvirt/qemu 00:02:02.277 + echo 'End stage prepare_nvme.sh' 00:02:02.277 End stage prepare_nvme.sh 00:02:02.289 [Pipeline] sh 00:02:02.571 + DISTRO=fedora39 CPUS=10 RAM=12288 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh 00:02:02.572 Setup: -n 10 -s 12288 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex4-nvme.img -b /var/lib/libvirt/images/backends/ex4-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img -H -a -v -f fedora39 00:02:02.572 00:02:02.572 DIR=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant 00:02:02.572 SPDK_DIR=/var/jenkins/workspace/raid-vg-autotest/spdk 00:02:02.572 VAGRANT_TARGET=/var/jenkins/workspace/raid-vg-autotest 00:02:02.572 HELP=0 00:02:02.572 DRY_RUN=0 00:02:02.572 NVME_FILE=/var/lib/libvirt/images/backends/ex4-nvme.img,/var/lib/libvirt/images/backends/ex4-nvme-multi0.img, 00:02:02.572 NVME_DISKS_TYPE=nvme,nvme, 00:02:02.572 NVME_AUTO_CREATE=0 00:02:02.572 NVME_DISKS_NAMESPACES=,/var/lib/libvirt/images/backends/ex4-nvme-multi1.img:/var/lib/libvirt/images/backends/ex4-nvme-multi2.img, 00:02:02.572 NVME_CMB=,, 00:02:02.572 NVME_PMR=,, 00:02:02.572 NVME_ZNS=,, 00:02:02.572 NVME_MS=,, 00:02:02.572 NVME_FDP=,, 00:02:02.572 SPDK_VAGRANT_DISTRO=fedora39 00:02:02.572 SPDK_VAGRANT_VMCPU=10 00:02:02.572 SPDK_VAGRANT_VMRAM=12288 00:02:02.572 SPDK_VAGRANT_PROVIDER=libvirt 00:02:02.572 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911 00:02:02.572 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 00:02:02.572 SPDK_OPENSTACK_NETWORK=0 00:02:02.572 VAGRANT_PACKAGE_BOX=0 00:02:02.572 
VAGRANTFILE=/var/jenkins/workspace/raid-vg-autotest/spdk/scripts/vagrant/Vagrantfile 00:02:02.572 FORCE_DISTRO=true 00:02:02.572 VAGRANT_BOX_VERSION= 00:02:02.572 EXTRA_VAGRANTFILES= 00:02:02.572 NIC_MODEL=e1000 00:02:02.572 00:02:02.572 mkdir: created directory '/var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt' 00:02:02.572 /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt /var/jenkins/workspace/raid-vg-autotest 00:02:05.863 Bringing machine 'default' up with 'libvirt' provider... 00:02:05.863 ==> default: Creating image (snapshot of base box volume). 00:02:06.127 ==> default: Creating domain with the following settings... 00:02:06.127 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1732716103_ee16deca877237fdb67a 00:02:06.127 ==> default: -- Domain type: kvm 00:02:06.127 ==> default: -- Cpus: 10 00:02:06.127 ==> default: -- Feature: acpi 00:02:06.127 ==> default: -- Feature: apic 00:02:06.127 ==> default: -- Feature: pae 00:02:06.127 ==> default: -- Memory: 12288M 00:02:06.127 ==> default: -- Memory Backing: hugepages: 00:02:06.127 ==> default: -- Management MAC: 00:02:06.127 ==> default: -- Loader: 00:02:06.127 ==> default: -- Nvram: 00:02:06.127 ==> default: -- Base box: spdk/fedora39 00:02:06.127 ==> default: -- Storage pool: default 00:02:06.127 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1732716103_ee16deca877237fdb67a.img (20G) 00:02:06.127 ==> default: -- Volume Cache: default 00:02:06.127 ==> default: -- Kernel: 00:02:06.127 ==> default: -- Initrd: 00:02:06.127 ==> default: -- Graphics Type: vnc 00:02:06.127 ==> default: -- Graphics Port: -1 00:02:06.127 ==> default: -- Graphics IP: 127.0.0.1 00:02:06.127 ==> default: -- Graphics Password: Not defined 00:02:06.127 ==> default: -- Video Type: cirrus 00:02:06.127 ==> default: -- Video VRAM: 9216 00:02:06.127 ==> default: -- Sound Type: 00:02:06.127 ==> default: -- Keymap: en-us 00:02:06.127 ==> default: -- TPM Path: 00:02:06.127 ==> 
default: -- INPUT: type=mouse, bus=ps2 00:02:06.127 ==> default: -- Command line args: 00:02:06.127 ==> default: -> value=-device, 00:02:06.127 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10, 00:02:06.127 ==> default: -> value=-drive, 00:02:06.127 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme.img,if=none,id=nvme-0-drive0, 00:02:06.127 ==> default: -> value=-device, 00:02:06.127 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:06.127 ==> default: -> value=-device, 00:02:06.127 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11, 00:02:06.127 ==> default: -> value=-drive, 00:02:06.127 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi0.img,if=none,id=nvme-1-drive0, 00:02:06.127 ==> default: -> value=-device, 00:02:06.127 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:06.127 ==> default: -> value=-drive, 00:02:06.127 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi1.img,if=none,id=nvme-1-drive1, 00:02:06.127 ==> default: -> value=-device, 00:02:06.127 ==> default: -> value=nvme-ns,drive=nvme-1-drive1,bus=nvme-1,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:06.127 ==> default: -> value=-drive, 00:02:06.127 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex4-nvme-multi2.img,if=none,id=nvme-1-drive2, 00:02:06.127 ==> default: -> value=-device, 00:02:06.127 ==> default: -> value=nvme-ns,drive=nvme-1-drive2,bus=nvme-1,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096, 00:02:06.127 ==> default: Creating shared folders metadata... 00:02:06.127 ==> default: Starting domain. 00:02:07.507 ==> default: Waiting for domain to get an IP address... 00:02:25.600 ==> default: Waiting for SSH to become available... 
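The QEMU command-line args above follow a fixed shape: one `-device nvme` controller per serial, then a `-drive`/`-device nvme-ns` pair per backing image, with `nsid` counting up. A minimal sketch of how such an argument list can be assembled, assuming placeholder image paths rather than the CI's ex4-* files:

```shell
#!/usr/bin/env bash
# Hedged sketch of the per-namespace NVMe argument construction shown above.
# build_nvme_args is an illustrative helper, not a script from the repo.
set -euo pipefail

build_nvme_args() {
  local ctrl_id="$1" serial="$2" addr="$3"; shift 3
  local args=(-device "nvme,id=${ctrl_id},serial=${serial},addr=${addr}")
  local nsid=1 img
  for img in "$@"; do
    # One drive + one nvme-ns device per image; drive index is nsid-1.
    args+=(-drive "format=raw,file=${img},if=none,id=${ctrl_id}-drive$((nsid - 1))")
    args+=(-device "nvme-ns,drive=${ctrl_id}-drive$((nsid - 1)),bus=${ctrl_id},nsid=${nsid}")
    nsid=$((nsid + 1))
  done
  printf '%s\n' "${args[@]}"
}
```

Applied to the log's second controller (serial 12341 with three multi*.img namespaces), this reproduces the nvme-1-drive0..2 / nsid=1..3 pairs seen above.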
00:02:27.037 ==> default: Configuring and enabling network interfaces... 00:02:31.252 default: SSH address: 192.168.121.147:22 00:02:31.252 default: SSH username: vagrant 00:02:31.252 default: SSH auth method: private key 00:02:33.155 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk 00:02:41.273 ==> default: Mounting SSHFS shared folder... 00:02:42.650 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/raid-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output 00:02:42.650 ==> default: Checking Mount.. 00:02:44.026 ==> default: Folder Successfully Mounted! 00:02:44.026 ==> default: Running provisioner: file... 00:02:44.974 default: ~/.gitconfig => .gitconfig 00:02:45.233 00:02:45.233 SUCCESS! 00:02:45.233 00:02:45.233 cd to /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use. 00:02:45.233 Use vagrant "suspend" and vagrant "resume" to stop and start. 00:02:45.233 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt" to destroy all trace of vm. 
00:02:45.233 00:02:45.243 [Pipeline] } 00:02:45.258 [Pipeline] // stage 00:02:45.268 [Pipeline] dir 00:02:45.269 Running in /var/jenkins/workspace/raid-vg-autotest/fedora39-libvirt 00:02:45.271 [Pipeline] { 00:02:45.283 [Pipeline] catchError 00:02:45.285 [Pipeline] { 00:02:45.299 [Pipeline] sh 00:02:45.580 + vagrant ssh-config --host vagrant 00:02:45.580 + sed -ne /^Host/,$p 00:02:45.580 + tee ssh_conf 00:02:48.870 Host vagrant 00:02:48.870 HostName 192.168.121.147 00:02:48.870 User vagrant 00:02:48.870 Port 22 00:02:48.871 UserKnownHostsFile /dev/null 00:02:48.871 StrictHostKeyChecking no 00:02:48.871 PasswordAuthentication no 00:02:48.871 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39 00:02:48.871 IdentitiesOnly yes 00:02:48.871 LogLevel FATAL 00:02:48.871 ForwardAgent yes 00:02:48.871 ForwardX11 yes 00:02:48.871 00:02:48.885 [Pipeline] withEnv 00:02:48.887 [Pipeline] { 00:02:48.904 [Pipeline] sh 00:02:49.184 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash 00:02:49.184 source /etc/os-release 00:02:49.184 [[ -e /image.version ]] && img=$(< /image.version) 00:02:49.184 # Minimal, systemd-like check. 00:02:49.184 if [[ -e /.dockerenv ]]; then 00:02:49.184 # Clear garbage from the node's name: 00:02:49.184 # agt-er_autotest_547-896 -> autotest_547-896 00:02:49.184 # $HOSTNAME is the actual container id 00:02:49.184 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_} 00:02:49.184 if grep -q "/etc/hostname" /proc/self/mountinfo; then 00:02:49.184 # We can assume this is a mount from a host where container is running, 00:02:49.184 # so fetch its hostname to easily identify the target swarm worker. 
00:02:49.184 container="$(< /etc/hostname) ($agent)" 00:02:49.184 else 00:02:49.184 # Fallback 00:02:49.184 container=$agent 00:02:49.184 fi 00:02:49.184 fi 00:02:49.184 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}" 00:02:49.184 00:02:49.454 [Pipeline] } 00:02:49.473 [Pipeline] // withEnv 00:02:49.483 [Pipeline] setCustomBuildProperty 00:02:49.499 [Pipeline] stage 00:02:49.502 [Pipeline] { (Tests) 00:02:49.522 [Pipeline] sh 00:02:49.802 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./ 00:02:50.076 [Pipeline] sh 00:02:50.452 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./ 00:02:50.467 [Pipeline] timeout 00:02:50.468 Timeout set to expire in 1 hr 30 min 00:02:50.470 [Pipeline] { 00:02:50.485 [Pipeline] sh 00:02:50.768 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard 00:02:51.336 HEAD is now at 38b931b23 nvmf: Set bdev_ext_io_opts::dif_check_flags_exclude_mask for read/write 00:02:51.349 [Pipeline] sh 00:02:51.629 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo 00:02:51.903 [Pipeline] sh 00:02:52.183 + scp -F ssh_conf -r /var/jenkins/workspace/raid-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo 00:02:52.455 [Pipeline] sh 00:02:52.735 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant JOB_BASE_NAME=raid-vg-autotest ./autoruner.sh spdk_repo 00:02:52.994 ++ readlink -f spdk_repo 00:02:52.994 + DIR_ROOT=/home/vagrant/spdk_repo 00:02:52.994 + [[ -n /home/vagrant/spdk_repo ]] 00:02:52.994 + DIR_SPDK=/home/vagrant/spdk_repo/spdk 00:02:52.994 + DIR_OUTPUT=/home/vagrant/spdk_repo/output 00:02:52.994 + [[ -d /home/vagrant/spdk_repo/spdk ]] 00:02:52.994 + [[ ! 
-d /home/vagrant/spdk_repo/output ]] 00:02:52.994 + [[ -d /home/vagrant/spdk_repo/output ]] 00:02:52.994 + [[ raid-vg-autotest == pkgdep-* ]] 00:02:52.994 + cd /home/vagrant/spdk_repo 00:02:52.994 + source /etc/os-release 00:02:52.994 ++ NAME='Fedora Linux' 00:02:52.994 ++ VERSION='39 (Cloud Edition)' 00:02:52.994 ++ ID=fedora 00:02:52.994 ++ VERSION_ID=39 00:02:52.994 ++ VERSION_CODENAME= 00:02:52.994 ++ PLATFORM_ID=platform:f39 00:02:52.994 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:02:52.994 ++ ANSI_COLOR='0;38;2;60;110;180' 00:02:52.994 ++ LOGO=fedora-logo-icon 00:02:52.994 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:02:52.994 ++ HOME_URL=https://fedoraproject.org/ 00:02:52.994 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:02:52.994 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:02:52.994 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:02:52.994 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:02:52.994 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:02:52.994 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:02:52.994 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:02:52.994 ++ SUPPORT_END=2024-11-12 00:02:52.994 ++ VARIANT='Cloud Edition' 00:02:52.994 ++ VARIANT_ID=cloud 00:02:52.994 + uname -a 00:02:52.994 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:02:52.994 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:02:53.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:02:53.562 Hugepages 00:02:53.562 node hugesize free / total 00:02:53.562 node0 1048576kB 0 / 0 00:02:53.562 node0 2048kB 0 / 0 00:02:53.562 00:02:53.562 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:53.562 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:02:53.562 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:02:53.562 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 
nvme1n1 nvme1n2 nvme1n3 00:02:53.562 + rm -f /tmp/spdk-ld-path 00:02:53.562 + source autorun-spdk.conf 00:02:53.562 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:53.562 ++ SPDK_RUN_ASAN=1 00:02:53.562 ++ SPDK_RUN_UBSAN=1 00:02:53.562 ++ SPDK_TEST_RAID=1 00:02:53.562 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:53.562 ++ RUN_NIGHTLY=0 00:02:53.562 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:53.562 + [[ -n '' ]] 00:02:53.562 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk 00:02:53.562 + for M in /var/spdk/build-*-manifest.txt 00:02:53.562 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:53.562 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.562 + for M in /var/spdk/build-*-manifest.txt 00:02:53.562 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:53.562 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.562 + for M in /var/spdk/build-*-manifest.txt 00:02:53.562 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:53.562 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/ 00:02:53.562 ++ uname 00:02:53.562 + [[ Linux == \L\i\n\u\x ]] 00:02:53.562 + sudo dmesg -T 00:02:53.562 + sudo dmesg --clear 00:02:53.562 + dmesg_pid=5213 00:02:53.562 + [[ Fedora Linux == FreeBSD ]] 00:02:53.562 + sudo dmesg -Tw 00:02:53.562 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:53.562 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:53.562 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:53.562 + [[ -x /usr/src/fio-static/fio ]] 00:02:53.562 + export FIO_BIN=/usr/src/fio-static/fio 00:02:53.562 + FIO_BIN=/usr/src/fio-static/fio 00:02:53.562 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:53.562 + [[ ! 
-v VFIO_QEMU_BIN ]] 00:02:53.562 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:53.562 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:53.562 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:53.562 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:53.562 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:53.562 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:53.563 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:53.822 14:02:30 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:53.822 14:02:30 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:53.822 14:02:30 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:53.822 14:02:30 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_RUN_ASAN=1 00:02:53.822 14:02:30 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_RUN_UBSAN=1 00:02:53.822 14:02:30 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_RAID=1 00:02:53.822 14:02:30 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi 00:02:53.822 14:02:30 -- spdk_repo/autorun-spdk.conf@6 -- $ RUN_NIGHTLY=0 00:02:53.822 14:02:30 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:53.822 14:02:30 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:02:53.822 14:02:30 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:02:53.822 14:02:30 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:02:53.822 14:02:30 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:53.822 14:02:30 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:53.822 14:02:30 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:53.822 14:02:30 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:53.822 14:02:30 -- 
paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.822 14:02:30 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.822 14:02:30 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.822 14:02:30 -- paths/export.sh@5 -- $ export PATH 00:02:53.823 14:02:30 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:53.823 14:02:30 -- 
common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:02:53.823 14:02:30 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:53.823 14:02:30 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732716150.XXXXXX 00:02:53.823 14:02:30 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732716150.ZIT1ab 00:02:53.823 14:02:30 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:53.823 14:02:30 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:02:53.823 14:02:30 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:02:53.823 14:02:30 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:02:53.823 14:02:30 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:02:53.823 14:02:30 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:53.823 14:02:30 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:53.823 14:02:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:53.823 14:02:30 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f' 00:02:53.823 14:02:30 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:53.823 14:02:30 -- pm/common@17 -- $ local monitor 00:02:53.823 14:02:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.823 14:02:30 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:53.823 14:02:30 -- pm/common@25 -- $ sleep 1 00:02:53.823 14:02:30 -- pm/common@21 -- $ date +%s 00:02:53.823 14:02:30 -- pm/common@21 -- $ date +%s 00:02:53.823 
14:02:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732716150 00:02:53.823 14:02:30 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1732716150 00:02:53.823 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732716150_collect-vmstat.pm.log 00:02:53.823 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1732716150_collect-cpu-load.pm.log 00:02:54.761 14:02:31 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:54.761 14:02:31 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:54.761 14:02:31 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:54.761 14:02:31 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk 00:02:54.761 14:02:31 -- spdk/autobuild.sh@16 -- $ date -u 00:02:54.761 Wed Nov 27 02:02:31 PM UTC 2024 00:02:54.761 14:02:31 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:54.761 v25.01-pre-272-g38b931b23 00:02:54.761 14:02:31 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']' 00:02:54.761 14:02:31 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan' 00:02:54.761 14:02:31 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:54.761 14:02:31 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:54.761 14:02:31 -- common/autotest_common.sh@10 -- $ set +x 00:02:54.761 ************************************ 00:02:54.761 START TEST asan 00:02:54.761 ************************************ 00:02:54.761 using asan 00:02:54.761 14:02:31 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan' 00:02:54.761 00:02:54.761 real 0m0.000s 00:02:54.761 user 0m0.000s 00:02:54.761 sys 0m0.000s 00:02:54.761 14:02:31 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:54.761 ************************************ 00:02:54.761 END TEST 
asan 00:02:54.761 ************************************ 00:02:54.761 14:02:31 asan -- common/autotest_common.sh@10 -- $ set +x 00:02:54.761 14:02:32 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:54.761 14:02:32 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:54.761 14:02:32 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:54.761 14:02:32 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:54.761 14:02:32 -- common/autotest_common.sh@10 -- $ set +x 00:02:55.020 ************************************ 00:02:55.020 START TEST ubsan 00:02:55.020 ************************************ 00:02:55.020 using ubsan 00:02:55.020 14:02:32 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:55.020 00:02:55.020 real 0m0.000s 00:02:55.020 user 0m0.000s 00:02:55.020 sys 0m0.000s 00:02:55.020 14:02:32 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:55.020 ************************************ 00:02:55.020 END TEST ubsan 00:02:55.020 ************************************ 00:02:55.020 14:02:32 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:55.020 14:02:32 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:02:55.020 14:02:32 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:55.020 14:02:32 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:55.020 14:02:32 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:55.020 14:02:32 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:55.020 14:02:32 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:55.020 14:02:32 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:55.020 14:02:32 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:55.020 14:02:32 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-raid5f --with-shared 00:02:55.020 Using default SPDK env in 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:02:55.020 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:55.589 Using 'verbs' RDMA provider 00:03:11.415 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:03:23.627 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:03:23.627 Creating mk/config.mk...done. 00:03:23.627 Creating mk/cc.flags.mk...done. 00:03:23.627 Type 'make' to build. 00:03:23.627 14:03:00 -- spdk/autobuild.sh@70 -- $ run_test make make -j10 00:03:23.627 14:03:00 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:23.627 14:03:00 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:23.627 14:03:00 -- common/autotest_common.sh@10 -- $ set +x 00:03:23.627 ************************************ 00:03:23.627 START TEST make 00:03:23.627 ************************************ 00:03:23.627 14:03:00 make -- common/autotest_common.sh@1129 -- $ make -j10 00:03:23.627 make[1]: Nothing to be done for 'all'. 
00:03:35.841 The Meson build system 00:03:35.841 Version: 1.5.0 00:03:35.841 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:03:35.841 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:35.841 Build type: native build 00:03:35.841 Program cat found: YES (/usr/bin/cat) 00:03:35.841 Project name: DPDK 00:03:35.841 Project version: 24.03.0 00:03:35.841 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:35.841 C linker for the host machine: cc ld.bfd 2.40-14 00:03:35.841 Host machine cpu family: x86_64 00:03:35.841 Host machine cpu: x86_64 00:03:35.841 Message: ## Building in Developer Mode ## 00:03:35.841 Program pkg-config found: YES (/usr/bin/pkg-config) 00:03:35.841 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:03:35.841 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:03:35.842 Program python3 found: YES (/usr/bin/python3) 00:03:35.842 Program cat found: YES (/usr/bin/cat) 00:03:35.842 Compiler for C supports arguments -march=native: YES 00:03:35.842 Checking for size of "void *" : 8 00:03:35.842 Checking for size of "void *" : 8 (cached) 00:03:35.842 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:03:35.842 Library m found: YES 00:03:35.842 Library numa found: YES 00:03:35.842 Has header "numaif.h" : YES 00:03:35.842 Library fdt found: NO 00:03:35.842 Library execinfo found: NO 00:03:35.842 Has header "execinfo.h" : YES 00:03:35.842 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:35.842 Run-time dependency libarchive found: NO (tried pkgconfig) 00:03:35.842 Run-time dependency libbsd found: NO (tried pkgconfig) 00:03:35.842 Run-time dependency jansson found: NO (tried pkgconfig) 00:03:35.842 Run-time dependency openssl found: YES 3.1.1 00:03:35.842 Run-time dependency libpcap found: YES 1.10.4 00:03:35.842 Has header "pcap.h" with dependency 
libpcap: YES 00:03:35.842 Compiler for C supports arguments -Wcast-qual: YES 00:03:35.842 Compiler for C supports arguments -Wdeprecated: YES 00:03:35.842 Compiler for C supports arguments -Wformat: YES 00:03:35.842 Compiler for C supports arguments -Wformat-nonliteral: NO 00:03:35.842 Compiler for C supports arguments -Wformat-security: NO 00:03:35.842 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:35.842 Compiler for C supports arguments -Wmissing-prototypes: YES 00:03:35.842 Compiler for C supports arguments -Wnested-externs: YES 00:03:35.842 Compiler for C supports arguments -Wold-style-definition: YES 00:03:35.842 Compiler for C supports arguments -Wpointer-arith: YES 00:03:35.842 Compiler for C supports arguments -Wsign-compare: YES 00:03:35.842 Compiler for C supports arguments -Wstrict-prototypes: YES 00:03:35.842 Compiler for C supports arguments -Wundef: YES 00:03:35.842 Compiler for C supports arguments -Wwrite-strings: YES 00:03:35.842 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:03:35.842 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:03:35.842 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:35.842 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:03:35.842 Program objdump found: YES (/usr/bin/objdump) 00:03:35.842 Compiler for C supports arguments -mavx512f: YES 00:03:35.842 Checking if "AVX512 checking" compiles: YES 00:03:35.842 Fetching value of define "__SSE4_2__" : 1 00:03:35.842 Fetching value of define "__AES__" : 1 00:03:35.842 Fetching value of define "__AVX__" : 1 00:03:35.842 Fetching value of define "__AVX2__" : 1 00:03:35.842 Fetching value of define "__AVX512BW__" : (undefined) 00:03:35.842 Fetching value of define "__AVX512CD__" : (undefined) 00:03:35.842 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:35.842 Fetching value of define "__AVX512F__" : (undefined) 00:03:35.842 Fetching value of define "__AVX512VL__" : 
(undefined) 00:03:35.842 Fetching value of define "__PCLMUL__" : 1 00:03:35.842 Fetching value of define "__RDRND__" : 1 00:03:35.842 Fetching value of define "__RDSEED__" : 1 00:03:35.842 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:35.842 Fetching value of define "__znver1__" : (undefined) 00:03:35.842 Fetching value of define "__znver2__" : (undefined) 00:03:35.842 Fetching value of define "__znver3__" : (undefined) 00:03:35.842 Fetching value of define "__znver4__" : (undefined) 00:03:35.842 Library asan found: YES 00:03:35.842 Compiler for C supports arguments -Wno-format-truncation: YES 00:03:35.842 Message: lib/log: Defining dependency "log" 00:03:35.842 Message: lib/kvargs: Defining dependency "kvargs" 00:03:35.842 Message: lib/telemetry: Defining dependency "telemetry" 00:03:35.842 Library rt found: YES 00:03:35.842 Checking for function "getentropy" : NO 00:03:35.842 Message: lib/eal: Defining dependency "eal" 00:03:35.842 Message: lib/ring: Defining dependency "ring" 00:03:35.842 Message: lib/rcu: Defining dependency "rcu" 00:03:35.842 Message: lib/mempool: Defining dependency "mempool" 00:03:35.842 Message: lib/mbuf: Defining dependency "mbuf" 00:03:35.842 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:35.842 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:35.842 Compiler for C supports arguments -mpclmul: YES 00:03:35.842 Compiler for C supports arguments -maes: YES 00:03:35.842 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:35.842 Compiler for C supports arguments -mavx512bw: YES 00:03:35.842 Compiler for C supports arguments -mavx512dq: YES 00:03:35.842 Compiler for C supports arguments -mavx512vl: YES 00:03:35.842 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:35.842 Compiler for C supports arguments -mavx2: YES 00:03:35.842 Compiler for C supports arguments -mavx: YES 00:03:35.842 Message: lib/net: Defining dependency "net" 00:03:35.842 Message: lib/meter: Defining 
dependency "meter" 00:03:35.842 Message: lib/ethdev: Defining dependency "ethdev" 00:03:35.842 Message: lib/pci: Defining dependency "pci" 00:03:35.842 Message: lib/cmdline: Defining dependency "cmdline" 00:03:35.842 Message: lib/hash: Defining dependency "hash" 00:03:35.842 Message: lib/timer: Defining dependency "timer" 00:03:35.842 Message: lib/compressdev: Defining dependency "compressdev" 00:03:35.842 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:35.842 Message: lib/dmadev: Defining dependency "dmadev" 00:03:35.842 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:35.842 Message: lib/power: Defining dependency "power" 00:03:35.842 Message: lib/reorder: Defining dependency "reorder" 00:03:35.842 Message: lib/security: Defining dependency "security" 00:03:35.842 Has header "linux/userfaultfd.h" : YES 00:03:35.842 Has header "linux/vduse.h" : YES 00:03:35.842 Message: lib/vhost: Defining dependency "vhost" 00:03:35.842 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:03:35.842 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:35.842 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:35.842 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:03:35.842 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:35.842 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:35.842 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:35.842 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:35.842 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:35.842 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:35.842 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:35.842 Configuring doxy-api-html.conf using configuration 00:03:35.842 Configuring doxy-api-man.conf using configuration 00:03:35.842 Program mandb found: YES 
(/usr/bin/mandb) 00:03:35.842 Program sphinx-build found: NO 00:03:35.842 Configuring rte_build_config.h using configuration 00:03:35.842 Message: 00:03:35.842 ================= 00:03:35.842 Applications Enabled 00:03:35.842 ================= 00:03:35.842 00:03:35.842 apps: 00:03:35.842 00:03:35.842 00:03:35.842 Message: 00:03:35.842 ================= 00:03:35.842 Libraries Enabled 00:03:35.842 ================= 00:03:35.842 00:03:35.842 libs: 00:03:35.842 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:35.842 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:35.842 cryptodev, dmadev, power, reorder, security, vhost, 00:03:35.842 00:03:35.842 Message: 00:03:35.842 =============== 00:03:35.842 Drivers Enabled 00:03:35.842 =============== 00:03:35.842 00:03:35.842 common: 00:03:35.842 00:03:35.842 bus: 00:03:35.842 pci, vdev, 00:03:35.842 mempool: 00:03:35.842 ring, 00:03:35.842 dma: 00:03:35.842 00:03:35.842 net: 00:03:35.842 00:03:35.842 crypto: 00:03:35.842 00:03:35.842 compress: 00:03:35.842 00:03:35.842 vdpa: 00:03:35.842 00:03:35.842 00:03:35.842 Message: 00:03:35.842 ================= 00:03:35.842 Content Skipped 00:03:35.842 ================= 00:03:35.842 00:03:35.842 apps: 00:03:35.842 dumpcap: explicitly disabled via build config 00:03:35.842 graph: explicitly disabled via build config 00:03:35.842 pdump: explicitly disabled via build config 00:03:35.842 proc-info: explicitly disabled via build config 00:03:35.842 test-acl: explicitly disabled via build config 00:03:35.842 test-bbdev: explicitly disabled via build config 00:03:35.842 test-cmdline: explicitly disabled via build config 00:03:35.842 test-compress-perf: explicitly disabled via build config 00:03:35.842 test-crypto-perf: explicitly disabled via build config 00:03:35.842 test-dma-perf: explicitly disabled via build config 00:03:35.842 test-eventdev: explicitly disabled via build config 00:03:35.842 test-fib: explicitly disabled via build config 00:03:35.842 
test-flow-perf: explicitly disabled via build config 00:03:35.842 test-gpudev: explicitly disabled via build config 00:03:35.842 test-mldev: explicitly disabled via build config 00:03:35.842 test-pipeline: explicitly disabled via build config 00:03:35.842 test-pmd: explicitly disabled via build config 00:03:35.842 test-regex: explicitly disabled via build config 00:03:35.842 test-sad: explicitly disabled via build config 00:03:35.842 test-security-perf: explicitly disabled via build config 00:03:35.842 00:03:35.842 libs: 00:03:35.842 argparse: explicitly disabled via build config 00:03:35.842 metrics: explicitly disabled via build config 00:03:35.842 acl: explicitly disabled via build config 00:03:35.842 bbdev: explicitly disabled via build config 00:03:35.842 bitratestats: explicitly disabled via build config 00:03:35.842 bpf: explicitly disabled via build config 00:03:35.842 cfgfile: explicitly disabled via build config 00:03:35.842 distributor: explicitly disabled via build config 00:03:35.842 efd: explicitly disabled via build config 00:03:35.842 eventdev: explicitly disabled via build config 00:03:35.842 dispatcher: explicitly disabled via build config 00:03:35.842 gpudev: explicitly disabled via build config 00:03:35.842 gro: explicitly disabled via build config 00:03:35.842 gso: explicitly disabled via build config 00:03:35.842 ip_frag: explicitly disabled via build config 00:03:35.842 jobstats: explicitly disabled via build config 00:03:35.842 latencystats: explicitly disabled via build config 00:03:35.843 lpm: explicitly disabled via build config 00:03:35.843 member: explicitly disabled via build config 00:03:35.843 pcapng: explicitly disabled via build config 00:03:35.843 rawdev: explicitly disabled via build config 00:03:35.843 regexdev: explicitly disabled via build config 00:03:35.843 mldev: explicitly disabled via build config 00:03:35.843 rib: explicitly disabled via build config 00:03:35.843 sched: explicitly disabled via build config 00:03:35.843 
stack: explicitly disabled via build config 00:03:35.843 ipsec: explicitly disabled via build config 00:03:35.843 pdcp: explicitly disabled via build config 00:03:35.843 fib: explicitly disabled via build config 00:03:35.843 port: explicitly disabled via build config 00:03:35.843 pdump: explicitly disabled via build config 00:03:35.843 table: explicitly disabled via build config 00:03:35.843 pipeline: explicitly disabled via build config 00:03:35.843 graph: explicitly disabled via build config 00:03:35.843 node: explicitly disabled via build config 00:03:35.843 00:03:35.843 drivers: 00:03:35.843 common/cpt: not in enabled drivers build config 00:03:35.843 common/dpaax: not in enabled drivers build config 00:03:35.843 common/iavf: not in enabled drivers build config 00:03:35.843 common/idpf: not in enabled drivers build config 00:03:35.843 common/ionic: not in enabled drivers build config 00:03:35.843 common/mvep: not in enabled drivers build config 00:03:35.843 common/octeontx: not in enabled drivers build config 00:03:35.843 bus/auxiliary: not in enabled drivers build config 00:03:35.843 bus/cdx: not in enabled drivers build config 00:03:35.843 bus/dpaa: not in enabled drivers build config 00:03:35.843 bus/fslmc: not in enabled drivers build config 00:03:35.843 bus/ifpga: not in enabled drivers build config 00:03:35.843 bus/platform: not in enabled drivers build config 00:03:35.843 bus/uacce: not in enabled drivers build config 00:03:35.843 bus/vmbus: not in enabled drivers build config 00:03:35.843 common/cnxk: not in enabled drivers build config 00:03:35.843 common/mlx5: not in enabled drivers build config 00:03:35.843 common/nfp: not in enabled drivers build config 00:03:35.843 common/nitrox: not in enabled drivers build config 00:03:35.843 common/qat: not in enabled drivers build config 00:03:35.843 common/sfc_efx: not in enabled drivers build config 00:03:35.843 mempool/bucket: not in enabled drivers build config 00:03:35.843 mempool/cnxk: not in enabled 
drivers build config 00:03:35.843 mempool/dpaa: not in enabled drivers build config 00:03:35.843 mempool/dpaa2: not in enabled drivers build config 00:03:35.843 mempool/octeontx: not in enabled drivers build config 00:03:35.843 mempool/stack: not in enabled drivers build config 00:03:35.843 dma/cnxk: not in enabled drivers build config 00:03:35.843 dma/dpaa: not in enabled drivers build config 00:03:35.843 dma/dpaa2: not in enabled drivers build config 00:03:35.843 dma/hisilicon: not in enabled drivers build config 00:03:35.843 dma/idxd: not in enabled drivers build config 00:03:35.843 dma/ioat: not in enabled drivers build config 00:03:35.843 dma/skeleton: not in enabled drivers build config 00:03:35.843 net/af_packet: not in enabled drivers build config 00:03:35.843 net/af_xdp: not in enabled drivers build config 00:03:35.843 net/ark: not in enabled drivers build config 00:03:35.843 net/atlantic: not in enabled drivers build config 00:03:35.843 net/avp: not in enabled drivers build config 00:03:35.843 net/axgbe: not in enabled drivers build config 00:03:35.843 net/bnx2x: not in enabled drivers build config 00:03:35.843 net/bnxt: not in enabled drivers build config 00:03:35.843 net/bonding: not in enabled drivers build config 00:03:35.843 net/cnxk: not in enabled drivers build config 00:03:35.843 net/cpfl: not in enabled drivers build config 00:03:35.843 net/cxgbe: not in enabled drivers build config 00:03:35.843 net/dpaa: not in enabled drivers build config 00:03:35.843 net/dpaa2: not in enabled drivers build config 00:03:35.843 net/e1000: not in enabled drivers build config 00:03:35.843 net/ena: not in enabled drivers build config 00:03:35.843 net/enetc: not in enabled drivers build config 00:03:35.843 net/enetfec: not in enabled drivers build config 00:03:35.843 net/enic: not in enabled drivers build config 00:03:35.843 net/failsafe: not in enabled drivers build config 00:03:35.843 net/fm10k: not in enabled drivers build config 00:03:35.843 net/gve: not in 
enabled drivers build config 00:03:35.843 net/hinic: not in enabled drivers build config 00:03:35.843 net/hns3: not in enabled drivers build config 00:03:35.843 net/i40e: not in enabled drivers build config 00:03:35.843 net/iavf: not in enabled drivers build config 00:03:35.843 net/ice: not in enabled drivers build config 00:03:35.843 net/idpf: not in enabled drivers build config 00:03:35.843 net/igc: not in enabled drivers build config 00:03:35.843 net/ionic: not in enabled drivers build config 00:03:35.843 net/ipn3ke: not in enabled drivers build config 00:03:35.843 net/ixgbe: not in enabled drivers build config 00:03:35.843 net/mana: not in enabled drivers build config 00:03:35.843 net/memif: not in enabled drivers build config 00:03:35.843 net/mlx4: not in enabled drivers build config 00:03:35.843 net/mlx5: not in enabled drivers build config 00:03:35.843 net/mvneta: not in enabled drivers build config 00:03:35.843 net/mvpp2: not in enabled drivers build config 00:03:35.843 net/netvsc: not in enabled drivers build config 00:03:35.843 net/nfb: not in enabled drivers build config 00:03:35.843 net/nfp: not in enabled drivers build config 00:03:35.843 net/ngbe: not in enabled drivers build config 00:03:35.843 net/null: not in enabled drivers build config 00:03:35.843 net/octeontx: not in enabled drivers build config 00:03:35.843 net/octeon_ep: not in enabled drivers build config 00:03:35.843 net/pcap: not in enabled drivers build config 00:03:35.843 net/pfe: not in enabled drivers build config 00:03:35.843 net/qede: not in enabled drivers build config 00:03:35.843 net/ring: not in enabled drivers build config 00:03:35.843 net/sfc: not in enabled drivers build config 00:03:35.843 net/softnic: not in enabled drivers build config 00:03:35.843 net/tap: not in enabled drivers build config 00:03:35.843 net/thunderx: not in enabled drivers build config 00:03:35.843 net/txgbe: not in enabled drivers build config 00:03:35.843 net/vdev_netvsc: not in enabled drivers build 
config 00:03:35.843 net/vhost: not in enabled drivers build config 00:03:35.843 net/virtio: not in enabled drivers build config 00:03:35.843 net/vmxnet3: not in enabled drivers build config 00:03:35.843 raw/*: missing internal dependency, "rawdev" 00:03:35.843 crypto/armv8: not in enabled drivers build config 00:03:35.843 crypto/bcmfs: not in enabled drivers build config 00:03:35.843 crypto/caam_jr: not in enabled drivers build config 00:03:35.843 crypto/ccp: not in enabled drivers build config 00:03:35.843 crypto/cnxk: not in enabled drivers build config 00:03:35.843 crypto/dpaa_sec: not in enabled drivers build config 00:03:35.843 crypto/dpaa2_sec: not in enabled drivers build config 00:03:35.843 crypto/ipsec_mb: not in enabled drivers build config 00:03:35.843 crypto/mlx5: not in enabled drivers build config 00:03:35.843 crypto/mvsam: not in enabled drivers build config 00:03:35.843 crypto/nitrox: not in enabled drivers build config 00:03:35.843 crypto/null: not in enabled drivers build config 00:03:35.843 crypto/octeontx: not in enabled drivers build config 00:03:35.843 crypto/openssl: not in enabled drivers build config 00:03:35.843 crypto/scheduler: not in enabled drivers build config 00:03:35.843 crypto/uadk: not in enabled drivers build config 00:03:35.843 crypto/virtio: not in enabled drivers build config 00:03:35.843 compress/isal: not in enabled drivers build config 00:03:35.843 compress/mlx5: not in enabled drivers build config 00:03:35.843 compress/nitrox: not in enabled drivers build config 00:03:35.843 compress/octeontx: not in enabled drivers build config 00:03:35.843 compress/zlib: not in enabled drivers build config 00:03:35.843 regex/*: missing internal dependency, "regexdev" 00:03:35.843 ml/*: missing internal dependency, "mldev" 00:03:35.843 vdpa/ifc: not in enabled drivers build config 00:03:35.843 vdpa/mlx5: not in enabled drivers build config 00:03:35.843 vdpa/nfp: not in enabled drivers build config 00:03:35.843 vdpa/sfc: not in enabled 
drivers build config 00:03:35.843 event/*: missing internal dependency, "eventdev" 00:03:35.843 baseband/*: missing internal dependency, "bbdev" 00:03:35.843 gpu/*: missing internal dependency, "gpudev" 00:03:35.843 00:03:35.843 00:03:35.843 Build targets in project: 85 00:03:35.843 00:03:35.843 DPDK 24.03.0 00:03:35.843 00:03:35.843 User defined options 00:03:35.843 buildtype : debug 00:03:35.843 default_library : shared 00:03:35.843 libdir : lib 00:03:35.843 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:35.843 b_sanitize : address 00:03:35.843 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:03:35.843 c_link_args : 00:03:35.843 cpu_instruction_set: native 00:03:35.843 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:03:35.843 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:03:35.843 enable_docs : false 00:03:35.843 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:03:35.843 enable_kmods : false 00:03:35.843 max_lcores : 128 00:03:35.843 tests : false 00:03:35.843 00:03:35.843 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:35.843 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:03:35.843 [1/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:03:35.843 [2/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:03:35.843 [3/268] Linking static target lib/librte_kvargs.a 00:03:35.843 [4/268] Compiling C object 
lib/librte_log.a.p/log_log.c.o 00:03:35.843 [5/268] Linking static target lib/librte_log.a 00:03:36.102 [6/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:03:36.360 [7/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.360 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:03:36.619 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:03:36.619 [10/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:03:36.619 [11/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:03:36.878 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:03:36.878 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:03:36.878 [14/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:03:36.878 [15/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:03:36.878 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:03:36.878 [17/268] Linking target lib/librte_log.so.24.1 00:03:36.878 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:03:36.878 [19/268] Linking static target lib/librte_telemetry.a 00:03:37.138 [20/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:03:37.138 [21/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:03:37.138 [22/268] Linking target lib/librte_kvargs.so.24.1 00:03:37.398 [23/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:03:37.398 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:03:37.658 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:03:37.658 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 
00:03:37.658 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:03:37.658 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:03:37.917 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:03:37.917 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:03:37.917 [31/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:03:37.917 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:03:37.917 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:03:37.917 [34/268] Linking target lib/librte_telemetry.so.24.1 00:03:38.176 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:03:38.176 [36/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:03:38.435 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:03:38.435 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:03:38.435 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:03:38.435 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:03:38.435 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:03:38.694 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:03:38.694 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:03:38.694 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:03:38.953 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:03:38.953 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:03:38.953 [47/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:03:38.953 [48/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:03:39.211 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:03:39.211 [50/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:03:39.471 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:03:39.471 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:03:39.471 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:03:39.730 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:03:39.730 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:03:39.730 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:03:39.989 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:03:39.989 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:03:39.989 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:03:40.248 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:03:40.248 [61/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:03:40.248 [62/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:03:40.507 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:03:40.507 [64/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:03:40.507 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:03:40.507 [66/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:03:40.766 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:03:40.766 [68/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:03:40.766 [69/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:03:41.024 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:03:41.024 [71/268] 
Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:03:41.024 [72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:03:41.283 [73/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:03:41.283 [74/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:03:41.283 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:03:41.283 [76/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:03:41.283 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:03:41.554 [78/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:03:41.554 [79/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:03:41.554 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:03:41.554 [81/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:03:41.554 [82/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:03:41.813 [83/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:03:41.813 [84/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:03:41.813 [85/268] Linking static target lib/librte_ring.a 00:03:42.070 [86/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:03:42.070 [87/268] Linking static target lib/librte_eal.a 00:03:42.070 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:03:42.070 [89/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:03:42.328 [90/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:03:42.328 [91/268] Linking static target lib/librte_rcu.a 00:03:42.328 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:03:42.328 [93/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:03:42.328 [94/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 
00:03:42.328 [95/268] Linking static target lib/librte_mempool.a 00:03:42.328 [96/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:03:42.328 [97/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:03:42.588 [98/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:03:42.588 [99/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:03:42.860 [100/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.144 [101/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:03:43.144 [102/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:03:43.144 [103/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:03:43.144 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:03:43.144 [105/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:03:43.144 [106/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:03:43.144 [107/268] Linking static target lib/librte_mbuf.a 00:03:43.144 [108/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:03:43.410 [109/268] Linking static target lib/librte_net.a 00:03:43.410 [110/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:03:43.410 [111/268] Linking static target lib/librte_meter.a 00:03:43.668 [112/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.668 [113/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.927 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:03:43.927 [115/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:03:43.927 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:03:43.927 [117/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:03:43.927 [118/268] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:03:44.186 [119/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:44.186 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:03:44.754 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:03:44.754 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:03:45.013 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:03:45.013 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:45.013 [125/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:03:45.013 [126/268] Linking static target lib/librte_pci.a 00:03:45.013 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:45.013 [128/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:03:45.272 [129/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:45.272 [130/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:45.272 [131/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:45.530 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:45.530 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:45.530 [134/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:45.530 [135/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:45.530 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:45.530 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:45.789 [138/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:45.789 [139/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:45.789 [140/268] Compiling C object 
lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:45.789 [141/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:03:45.789 [142/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:45.789 [143/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:45.789 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:46.047 [145/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:46.306 [146/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:46.306 [147/268] Linking static target lib/librte_cmdline.a 00:03:46.306 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:46.306 [149/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:46.564 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:46.564 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:46.564 [152/268] Linking static target lib/librte_ethdev.a 00:03:46.564 [153/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:46.564 [154/268] Linking static target lib/librte_timer.a 00:03:46.823 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:46.823 [156/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:46.823 [157/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:47.390 [158/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:47.390 [159/268] Linking static target lib/librte_hash.a 00:03:47.390 [160/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:47.390 [161/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:47.390 [162/268] Linking static target lib/librte_compressdev.a 00:03:47.390 [163/268] Generating lib/timer.sym_chk with a custom 
command (wrapped by meson to capture output) 00:03:47.390 [164/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:47.649 [165/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:03:47.649 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:47.908 [167/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:03:47.908 [168/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:47.908 [169/268] Linking static target lib/librte_dmadev.a 00:03:47.908 [170/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.195 [171/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:03:48.195 [172/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:03:48.195 [173/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:03:48.195 [174/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.453 [175/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.711 [176/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:03:48.711 [177/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:03:48.711 [178/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:48.969 [179/268] Linking static target lib/librte_cryptodev.a 00:03:48.969 [180/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:03:48.969 [181/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:48.969 [182/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:03:48.969 [183/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:03:48.969 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 
00:03:49.535 [185/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:03:49.535 [186/268] Linking static target lib/librte_power.a 00:03:49.535 [187/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:49.535 [188/268] Linking static target lib/librte_reorder.a 00:03:49.535 [189/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:03:49.940 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:03:49.941 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:03:49.941 [192/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.198 [193/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:50.198 [194/268] Linking static target lib/librte_security.a 00:03:50.456 [195/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:03:50.715 [196/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.974 [197/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:50.974 [198/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:03:50.974 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:03:50.974 [200/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:51.232 [201/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:03:51.232 [202/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:51.491 [203/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:51.491 [204/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:51.491 [205/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:51.750 [206/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:03:51.750 
[207/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:03:52.008 [208/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:52.008 [209/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:52.266 [210/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:03:52.266 [211/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:52.266 [212/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:52.266 [213/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:52.266 [214/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:52.266 [215/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:52.266 [216/268] Linking static target drivers/librte_bus_vdev.a 00:03:52.266 [217/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:52.525 [218/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:52.525 [219/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:52.525 [220/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:52.525 [221/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:52.525 [222/268] Linking static target drivers/librte_bus_pci.a 00:03:52.525 [223/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:52.525 [224/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:52.525 [225/268] Linking static target drivers/librte_mempool_ring.a 00:03:52.784 [226/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.043 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to 
capture output) 00:03:53.302 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:03:53.560 [229/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:53.818 [230/268] Linking target lib/librte_eal.so.24.1 00:03:53.818 [231/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:53.818 [232/268] Linking target lib/librte_meter.so.24.1 00:03:53.818 [233/268] Linking target lib/librte_ring.so.24.1 00:03:53.818 [234/268] Linking target lib/librte_pci.so.24.1 00:03:53.818 [235/268] Linking target lib/librte_timer.so.24.1 00:03:54.076 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:03:54.076 [237/268] Linking target lib/librte_dmadev.so.24.1 00:03:54.076 [238/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:54.076 [239/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:54.076 [240/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:54.076 [241/268] Linking target lib/librte_rcu.so.24.1 00:03:54.076 [242/268] Linking target drivers/librte_bus_pci.so.24.1 00:03:54.076 [243/268] Linking target lib/librte_mempool.so.24.1 00:03:54.076 [244/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:03:54.076 [245/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:03:54.335 [246/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:54.335 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:54.335 [248/268] Linking target lib/librte_mbuf.so.24.1 00:03:54.335 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:03:54.593 [250/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:54.593 [251/268] Linking target lib/librte_net.so.24.1 00:03:54.593 [252/268] Linking target 
lib/librte_reorder.so.24.1 00:03:54.593 [253/268] Linking target lib/librte_compressdev.so.24.1 00:03:54.593 [254/268] Linking target lib/librte_cryptodev.so.24.1 00:03:54.593 [255/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:54.593 [256/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:54.593 [257/268] Linking target lib/librte_security.so.24.1 00:03:54.594 [258/268] Linking target lib/librte_cmdline.so.24.1 00:03:54.594 [259/268] Linking target lib/librte_hash.so.24.1 00:03:54.594 [260/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:54.852 [261/268] Linking target lib/librte_ethdev.so.24.1 00:03:54.852 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:03:55.111 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:03:55.111 [264/268] Linking target lib/librte_power.so.24.1 00:03:57.673 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:03:57.673 [266/268] Linking static target lib/librte_vhost.a 00:03:59.050 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:03:59.309 [268/268] Linking target lib/librte_vhost.so.24.1 00:03:59.309 INFO: autodetecting backend as ninja 00:03:59.309 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:04:21.243 CC lib/ut/ut.o 00:04:21.243 CC lib/ut_mock/mock.o 00:04:21.243 CC lib/log/log.o 00:04:21.243 CC lib/log/log_flags.o 00:04:21.243 CC lib/log/log_deprecated.o 00:04:21.243 LIB libspdk_ut.a 00:04:21.243 LIB libspdk_ut_mock.a 00:04:21.243 LIB libspdk_log.a 00:04:21.243 SO libspdk_ut.so.2.0 00:04:21.243 SO libspdk_ut_mock.so.6.0 00:04:21.243 SO libspdk_log.so.7.1 00:04:21.243 SYMLINK libspdk_ut.so 00:04:21.243 SYMLINK libspdk_ut_mock.so 00:04:21.243 SYMLINK libspdk_log.so 
00:04:21.243 CC lib/dma/dma.o 00:04:21.243 CC lib/util/base64.o 00:04:21.243 CC lib/util/bit_array.o 00:04:21.243 CC lib/util/cpuset.o 00:04:21.243 CC lib/util/crc32.o 00:04:21.243 CC lib/util/crc16.o 00:04:21.243 CXX lib/trace_parser/trace.o 00:04:21.243 CC lib/util/crc32c.o 00:04:21.243 CC lib/ioat/ioat.o 00:04:21.243 CC lib/vfio_user/host/vfio_user_pci.o 00:04:21.243 CC lib/util/crc32_ieee.o 00:04:21.243 CC lib/util/crc64.o 00:04:21.243 CC lib/util/dif.o 00:04:21.243 CC lib/util/fd.o 00:04:21.243 LIB libspdk_dma.a 00:04:21.243 SO libspdk_dma.so.5.0 00:04:21.243 CC lib/util/fd_group.o 00:04:21.243 CC lib/util/file.o 00:04:21.243 CC lib/vfio_user/host/vfio_user.o 00:04:21.243 SYMLINK libspdk_dma.so 00:04:21.243 CC lib/util/hexlify.o 00:04:21.243 CC lib/util/iov.o 00:04:21.243 LIB libspdk_ioat.a 00:04:21.243 CC lib/util/math.o 00:04:21.243 SO libspdk_ioat.so.7.0 00:04:21.243 SYMLINK libspdk_ioat.so 00:04:21.243 CC lib/util/net.o 00:04:21.243 CC lib/util/pipe.o 00:04:21.243 CC lib/util/strerror_tls.o 00:04:21.243 CC lib/util/string.o 00:04:21.243 CC lib/util/uuid.o 00:04:21.243 LIB libspdk_vfio_user.a 00:04:21.243 CC lib/util/xor.o 00:04:21.243 SO libspdk_vfio_user.so.5.0 00:04:21.243 CC lib/util/zipf.o 00:04:21.243 CC lib/util/md5.o 00:04:21.243 SYMLINK libspdk_vfio_user.so 00:04:21.243 LIB libspdk_util.a 00:04:21.243 SO libspdk_util.so.10.1 00:04:21.243 LIB libspdk_trace_parser.a 00:04:21.243 SO libspdk_trace_parser.so.6.0 00:04:21.243 SYMLINK libspdk_util.so 00:04:21.243 SYMLINK libspdk_trace_parser.so 00:04:21.243 CC lib/vmd/vmd.o 00:04:21.243 CC lib/vmd/led.o 00:04:21.243 CC lib/rdma_utils/rdma_utils.o 00:04:21.243 CC lib/conf/conf.o 00:04:21.243 CC lib/json/json_parse.o 00:04:21.243 CC lib/json/json_util.o 00:04:21.243 CC lib/json/json_write.o 00:04:21.243 CC lib/env_dpdk/env.o 00:04:21.243 CC lib/env_dpdk/memory.o 00:04:21.243 CC lib/idxd/idxd.o 00:04:21.243 CC lib/idxd/idxd_user.o 00:04:21.243 CC lib/idxd/idxd_kernel.o 00:04:21.243 LIB libspdk_conf.a 
00:04:21.243 LIB libspdk_rdma_utils.a 00:04:21.243 CC lib/env_dpdk/pci.o 00:04:21.243 SO libspdk_conf.so.6.0 00:04:21.243 LIB libspdk_json.a 00:04:21.243 SO libspdk_rdma_utils.so.1.0 00:04:21.243 SO libspdk_json.so.6.0 00:04:21.243 SYMLINK libspdk_conf.so 00:04:21.243 CC lib/env_dpdk/init.o 00:04:21.243 SYMLINK libspdk_rdma_utils.so 00:04:21.243 CC lib/env_dpdk/threads.o 00:04:21.243 SYMLINK libspdk_json.so 00:04:21.243 CC lib/env_dpdk/pci_ioat.o 00:04:21.243 CC lib/env_dpdk/pci_virtio.o 00:04:21.243 CC lib/env_dpdk/pci_vmd.o 00:04:21.243 CC lib/rdma_provider/common.o 00:04:21.243 CC lib/jsonrpc/jsonrpc_server.o 00:04:21.243 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:21.243 CC lib/jsonrpc/jsonrpc_client.o 00:04:21.243 CC lib/env_dpdk/pci_idxd.o 00:04:21.243 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:21.243 LIB libspdk_vmd.a 00:04:21.243 CC lib/env_dpdk/pci_event.o 00:04:21.243 LIB libspdk_idxd.a 00:04:21.244 CC lib/env_dpdk/sigbus_handler.o 00:04:21.244 SO libspdk_vmd.so.6.0 00:04:21.244 SO libspdk_idxd.so.12.1 00:04:21.244 CC lib/env_dpdk/pci_dpdk.o 00:04:21.244 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:21.244 SYMLINK libspdk_vmd.so 00:04:21.244 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:21.244 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:21.244 SYMLINK libspdk_idxd.so 00:04:21.244 LIB libspdk_rdma_provider.a 00:04:21.244 SO libspdk_rdma_provider.so.7.0 00:04:21.244 LIB libspdk_jsonrpc.a 00:04:21.244 SYMLINK libspdk_rdma_provider.so 00:04:21.244 SO libspdk_jsonrpc.so.6.0 00:04:21.244 SYMLINK libspdk_jsonrpc.so 00:04:21.502 CC lib/rpc/rpc.o 00:04:21.762 LIB libspdk_rpc.a 00:04:21.762 SO libspdk_rpc.so.6.0 00:04:21.762 LIB libspdk_env_dpdk.a 00:04:21.762 SYMLINK libspdk_rpc.so 00:04:21.762 SO libspdk_env_dpdk.so.15.1 00:04:22.021 SYMLINK libspdk_env_dpdk.so 00:04:22.021 CC lib/notify/notify.o 00:04:22.021 CC lib/notify/notify_rpc.o 00:04:22.021 CC lib/keyring/keyring_rpc.o 00:04:22.021 CC lib/keyring/keyring.o 00:04:22.021 CC lib/trace/trace.o 00:04:22.021 CC 
lib/trace/trace_flags.o 00:04:22.021 CC lib/trace/trace_rpc.o 00:04:22.279 LIB libspdk_notify.a 00:04:22.279 SO libspdk_notify.so.6.0 00:04:22.279 LIB libspdk_keyring.a 00:04:22.279 SO libspdk_keyring.so.2.0 00:04:22.279 SYMLINK libspdk_notify.so 00:04:22.279 LIB libspdk_trace.a 00:04:22.279 SO libspdk_trace.so.11.0 00:04:22.279 SYMLINK libspdk_keyring.so 00:04:22.538 SYMLINK libspdk_trace.so 00:04:22.797 CC lib/thread/iobuf.o 00:04:22.797 CC lib/sock/sock_rpc.o 00:04:22.797 CC lib/sock/sock.o 00:04:22.797 CC lib/thread/thread.o 00:04:23.364 LIB libspdk_sock.a 00:04:23.364 SO libspdk_sock.so.10.0 00:04:23.364 SYMLINK libspdk_sock.so 00:04:23.622 CC lib/nvme/nvme_ctrlr_cmd.o 00:04:23.622 CC lib/nvme/nvme_ctrlr.o 00:04:23.622 CC lib/nvme/nvme_fabric.o 00:04:23.622 CC lib/nvme/nvme_ns_cmd.o 00:04:23.622 CC lib/nvme/nvme_ns.o 00:04:23.622 CC lib/nvme/nvme_pcie_common.o 00:04:23.622 CC lib/nvme/nvme_pcie.o 00:04:23.622 CC lib/nvme/nvme_qpair.o 00:04:23.622 CC lib/nvme/nvme.o 00:04:24.557 CC lib/nvme/nvme_quirks.o 00:04:24.557 CC lib/nvme/nvme_transport.o 00:04:24.557 CC lib/nvme/nvme_discovery.o 00:04:24.557 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:04:24.557 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:04:24.815 LIB libspdk_thread.a 00:04:24.815 SO libspdk_thread.so.11.0 00:04:24.815 CC lib/nvme/nvme_tcp.o 00:04:24.815 CC lib/nvme/nvme_opal.o 00:04:24.815 SYMLINK libspdk_thread.so 00:04:24.815 CC lib/nvme/nvme_io_msg.o 00:04:25.074 CC lib/nvme/nvme_poll_group.o 00:04:25.074 CC lib/nvme/nvme_zns.o 00:04:25.074 CC lib/nvme/nvme_stubs.o 00:04:25.332 CC lib/nvme/nvme_auth.o 00:04:25.332 CC lib/nvme/nvme_cuse.o 00:04:25.332 CC lib/nvme/nvme_rdma.o 00:04:25.591 CC lib/accel/accel.o 00:04:25.591 CC lib/blob/blobstore.o 00:04:25.591 CC lib/blob/request.o 00:04:25.849 CC lib/accel/accel_rpc.o 00:04:25.849 CC lib/init/json_config.o 00:04:25.849 CC lib/init/subsystem.o 00:04:26.108 CC lib/init/subsystem_rpc.o 00:04:26.108 CC lib/accel/accel_sw.o 00:04:26.108 CC lib/init/rpc.o 00:04:26.367 CC 
lib/blob/zeroes.o 00:04:26.367 LIB libspdk_init.a 00:04:26.367 SO libspdk_init.so.6.0 00:04:26.625 CC lib/fsdev/fsdev.o 00:04:26.625 CC lib/virtio/virtio.o 00:04:26.625 SYMLINK libspdk_init.so 00:04:26.625 CC lib/virtio/virtio_vhost_user.o 00:04:26.625 CC lib/virtio/virtio_vfio_user.o 00:04:26.625 CC lib/virtio/virtio_pci.o 00:04:26.625 CC lib/fsdev/fsdev_io.o 00:04:26.883 CC lib/event/app.o 00:04:26.883 CC lib/blob/blob_bs_dev.o 00:04:26.883 CC lib/event/reactor.o 00:04:26.883 CC lib/event/log_rpc.o 00:04:26.883 LIB libspdk_virtio.a 00:04:26.883 LIB libspdk_accel.a 00:04:26.883 SO libspdk_virtio.so.7.0 00:04:27.142 SO libspdk_accel.so.16.0 00:04:27.142 CC lib/event/app_rpc.o 00:04:27.142 SYMLINK libspdk_virtio.so 00:04:27.142 CC lib/event/scheduler_static.o 00:04:27.142 CC lib/fsdev/fsdev_rpc.o 00:04:27.142 SYMLINK libspdk_accel.so 00:04:27.142 LIB libspdk_nvme.a 00:04:27.142 CC lib/bdev/bdev.o 00:04:27.142 CC lib/bdev/bdev_zone.o 00:04:27.142 CC lib/bdev/part.o 00:04:27.142 CC lib/bdev/bdev_rpc.o 00:04:27.401 LIB libspdk_fsdev.a 00:04:27.401 CC lib/bdev/scsi_nvme.o 00:04:27.401 SO libspdk_nvme.so.15.0 00:04:27.401 SO libspdk_fsdev.so.2.0 00:04:27.402 LIB libspdk_event.a 00:04:27.402 SYMLINK libspdk_fsdev.so 00:04:27.402 SO libspdk_event.so.14.0 00:04:27.660 SYMLINK libspdk_event.so 00:04:27.660 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:04:27.660 SYMLINK libspdk_nvme.so 00:04:28.597 LIB libspdk_fuse_dispatcher.a 00:04:28.597 SO libspdk_fuse_dispatcher.so.1.0 00:04:28.597 SYMLINK libspdk_fuse_dispatcher.so 00:04:29.974 LIB libspdk_blob.a 00:04:29.974 SO libspdk_blob.so.12.0 00:04:30.232 SYMLINK libspdk_blob.so 00:04:30.498 CC lib/blobfs/blobfs.o 00:04:30.498 CC lib/blobfs/tree.o 00:04:30.498 CC lib/lvol/lvol.o 00:04:31.078 LIB libspdk_bdev.a 00:04:31.078 SO libspdk_bdev.so.17.0 00:04:31.337 SYMLINK libspdk_bdev.so 00:04:31.337 CC lib/ublk/ublk.o 00:04:31.337 CC lib/ublk/ublk_rpc.o 00:04:31.337 CC lib/ftl/ftl_core.o 00:04:31.337 CC lib/ftl/ftl_init.o 00:04:31.337 
CC lib/ftl/ftl_layout.o 00:04:31.337 CC lib/nvmf/ctrlr.o 00:04:31.337 CC lib/nbd/nbd.o 00:04:31.337 CC lib/scsi/dev.o 00:04:31.595 LIB libspdk_blobfs.a 00:04:31.595 SO libspdk_blobfs.so.11.0 00:04:31.595 LIB libspdk_lvol.a 00:04:31.595 CC lib/scsi/lun.o 00:04:31.595 SO libspdk_lvol.so.11.0 00:04:31.595 SYMLINK libspdk_blobfs.so 00:04:31.595 CC lib/scsi/port.o 00:04:31.595 CC lib/scsi/scsi.o 00:04:31.853 SYMLINK libspdk_lvol.so 00:04:31.853 CC lib/nbd/nbd_rpc.o 00:04:31.853 CC lib/ftl/ftl_debug.o 00:04:31.853 CC lib/ftl/ftl_io.o 00:04:31.853 CC lib/scsi/scsi_bdev.o 00:04:31.853 CC lib/ftl/ftl_sb.o 00:04:31.853 CC lib/nvmf/ctrlr_discovery.o 00:04:31.853 CC lib/nvmf/ctrlr_bdev.o 00:04:32.110 CC lib/nvmf/subsystem.o 00:04:32.110 LIB libspdk_nbd.a 00:04:32.110 SO libspdk_nbd.so.7.0 00:04:32.110 CC lib/scsi/scsi_pr.o 00:04:32.110 SYMLINK libspdk_nbd.so 00:04:32.110 CC lib/scsi/scsi_rpc.o 00:04:32.110 CC lib/scsi/task.o 00:04:32.110 CC lib/ftl/ftl_l2p.o 00:04:32.367 LIB libspdk_ublk.a 00:04:32.367 CC lib/ftl/ftl_l2p_flat.o 00:04:32.367 SO libspdk_ublk.so.3.0 00:04:32.367 CC lib/nvmf/nvmf.o 00:04:32.367 SYMLINK libspdk_ublk.so 00:04:32.367 CC lib/nvmf/nvmf_rpc.o 00:04:32.367 CC lib/ftl/ftl_nv_cache.o 00:04:32.367 CC lib/ftl/ftl_band.o 00:04:32.626 LIB libspdk_scsi.a 00:04:32.626 CC lib/ftl/ftl_band_ops.o 00:04:32.626 CC lib/nvmf/transport.o 00:04:32.626 SO libspdk_scsi.so.9.0 00:04:32.626 SYMLINK libspdk_scsi.so 00:04:32.626 CC lib/nvmf/tcp.o 00:04:32.884 CC lib/ftl/ftl_writer.o 00:04:32.884 CC lib/ftl/ftl_rq.o 00:04:33.141 CC lib/iscsi/conn.o 00:04:33.141 CC lib/iscsi/init_grp.o 00:04:33.141 CC lib/iscsi/iscsi.o 00:04:33.398 CC lib/ftl/ftl_reloc.o 00:04:33.398 CC lib/ftl/ftl_l2p_cache.o 00:04:33.398 CC lib/ftl/ftl_p2l.o 00:04:33.398 CC lib/nvmf/stubs.o 00:04:33.398 CC lib/nvmf/mdns_server.o 00:04:33.655 CC lib/nvmf/rdma.o 00:04:33.913 CC lib/iscsi/param.o 00:04:33.913 CC lib/ftl/ftl_p2l_log.o 00:04:33.913 CC lib/iscsi/portal_grp.o 00:04:33.913 CC lib/vhost/vhost.o 
00:04:33.913 CC lib/iscsi/tgt_node.o 00:04:33.913 CC lib/iscsi/iscsi_subsystem.o 00:04:34.170 CC lib/iscsi/iscsi_rpc.o 00:04:34.170 CC lib/ftl/mngt/ftl_mngt.o 00:04:34.170 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:04:34.170 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:04:34.429 CC lib/nvmf/auth.o 00:04:34.429 CC lib/ftl/mngt/ftl_mngt_startup.o 00:04:34.429 CC lib/ftl/mngt/ftl_mngt_md.o 00:04:34.429 CC lib/iscsi/task.o 00:04:34.687 CC lib/vhost/vhost_rpc.o 00:04:34.687 CC lib/vhost/vhost_scsi.o 00:04:34.687 CC lib/vhost/vhost_blk.o 00:04:34.687 CC lib/vhost/rte_vhost_user.o 00:04:34.687 CC lib/ftl/mngt/ftl_mngt_misc.o 00:04:34.944 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:04:34.944 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:04:35.201 LIB libspdk_iscsi.a 00:04:35.201 CC lib/ftl/mngt/ftl_mngt_band.o 00:04:35.201 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:04:35.201 SO libspdk_iscsi.so.8.0 00:04:35.201 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:04:35.201 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:04:35.459 SYMLINK libspdk_iscsi.so 00:04:35.459 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:04:35.459 CC lib/ftl/utils/ftl_conf.o 00:04:35.459 CC lib/ftl/utils/ftl_md.o 00:04:35.459 CC lib/ftl/utils/ftl_mempool.o 00:04:35.459 CC lib/ftl/utils/ftl_bitmap.o 00:04:35.717 CC lib/ftl/utils/ftl_property.o 00:04:35.717 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:04:35.717 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:04:35.717 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:04:35.717 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:04:35.717 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:04:35.974 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:04:35.974 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:04:35.974 CC lib/ftl/upgrade/ftl_sb_v3.o 00:04:35.974 CC lib/ftl/upgrade/ftl_sb_v5.o 00:04:35.974 CC lib/ftl/nvc/ftl_nvc_dev.o 00:04:35.974 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:04:35.974 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:04:35.974 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:04:35.974 CC lib/ftl/base/ftl_base_dev.o 00:04:35.974 CC lib/ftl/base/ftl_base_bdev.o 00:04:36.232 
CC lib/ftl/ftl_trace.o 00:04:36.232 LIB libspdk_vhost.a 00:04:36.232 SO libspdk_vhost.so.8.0 00:04:36.491 SYMLINK libspdk_vhost.so 00:04:36.491 LIB libspdk_ftl.a 00:04:36.491 LIB libspdk_nvmf.a 00:04:36.749 SO libspdk_nvmf.so.20.0 00:04:36.749 SO libspdk_ftl.so.9.0 00:04:37.007 SYMLINK libspdk_nvmf.so 00:04:37.007 SYMLINK libspdk_ftl.so 00:04:37.333 CC module/env_dpdk/env_dpdk_rpc.o 00:04:37.333 CC module/scheduler/dynamic/scheduler_dynamic.o 00:04:37.333 CC module/accel/error/accel_error.o 00:04:37.333 CC module/fsdev/aio/fsdev_aio.o 00:04:37.333 CC module/sock/posix/posix.o 00:04:37.333 CC module/accel/dsa/accel_dsa.o 00:04:37.333 CC module/accel/iaa/accel_iaa.o 00:04:37.333 CC module/accel/ioat/accel_ioat.o 00:04:37.333 CC module/keyring/file/keyring.o 00:04:37.333 CC module/blob/bdev/blob_bdev.o 00:04:37.333 LIB libspdk_env_dpdk_rpc.a 00:04:37.593 SO libspdk_env_dpdk_rpc.so.6.0 00:04:37.593 SYMLINK libspdk_env_dpdk_rpc.so 00:04:37.593 CC module/fsdev/aio/fsdev_aio_rpc.o 00:04:37.593 CC module/keyring/file/keyring_rpc.o 00:04:37.593 LIB libspdk_scheduler_dynamic.a 00:04:37.593 CC module/accel/ioat/accel_ioat_rpc.o 00:04:37.593 CC module/accel/error/accel_error_rpc.o 00:04:37.593 CC module/accel/iaa/accel_iaa_rpc.o 00:04:37.593 SO libspdk_scheduler_dynamic.so.4.0 00:04:37.593 CC module/accel/dsa/accel_dsa_rpc.o 00:04:37.593 LIB libspdk_keyring_file.a 00:04:37.850 SYMLINK libspdk_scheduler_dynamic.so 00:04:37.850 LIB libspdk_blob_bdev.a 00:04:37.850 SO libspdk_keyring_file.so.2.0 00:04:37.850 SO libspdk_blob_bdev.so.12.0 00:04:37.850 LIB libspdk_accel_ioat.a 00:04:37.850 LIB libspdk_accel_iaa.a 00:04:37.850 LIB libspdk_accel_error.a 00:04:37.850 SO libspdk_accel_ioat.so.6.0 00:04:37.850 SO libspdk_accel_iaa.so.3.0 00:04:37.850 SO libspdk_accel_error.so.2.0 00:04:37.850 SYMLINK libspdk_keyring_file.so 00:04:37.850 SYMLINK libspdk_blob_bdev.so 00:04:37.850 LIB libspdk_accel_dsa.a 00:04:37.850 SYMLINK libspdk_accel_ioat.so 00:04:37.850 CC 
module/fsdev/aio/linux_aio_mgr.o 00:04:37.850 SYMLINK libspdk_accel_error.so 00:04:37.850 SO libspdk_accel_dsa.so.5.0 00:04:37.850 SYMLINK libspdk_accel_iaa.so 00:04:37.850 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:04:37.850 CC module/scheduler/gscheduler/gscheduler.o 00:04:37.850 SYMLINK libspdk_accel_dsa.so 00:04:38.109 CC module/keyring/linux/keyring.o 00:04:38.109 LIB libspdk_scheduler_gscheduler.a 00:04:38.109 LIB libspdk_scheduler_dpdk_governor.a 00:04:38.109 SO libspdk_scheduler_gscheduler.so.4.0 00:04:38.109 SO libspdk_scheduler_dpdk_governor.so.4.0 00:04:38.109 CC module/bdev/error/vbdev_error.o 00:04:38.109 CC module/blobfs/bdev/blobfs_bdev.o 00:04:38.109 CC module/bdev/gpt/gpt.o 00:04:38.109 CC module/bdev/delay/vbdev_delay.o 00:04:38.109 SYMLINK libspdk_scheduler_dpdk_governor.so 00:04:38.109 CC module/bdev/delay/vbdev_delay_rpc.o 00:04:38.109 SYMLINK libspdk_scheduler_gscheduler.so 00:04:38.109 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:04:38.109 CC module/keyring/linux/keyring_rpc.o 00:04:38.407 LIB libspdk_fsdev_aio.a 00:04:38.407 CC module/bdev/lvol/vbdev_lvol.o 00:04:38.407 SO libspdk_fsdev_aio.so.1.0 00:04:38.407 LIB libspdk_sock_posix.a 00:04:38.407 LIB libspdk_keyring_linux.a 00:04:38.407 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:04:38.407 SO libspdk_sock_posix.so.6.0 00:04:38.407 SO libspdk_keyring_linux.so.1.0 00:04:38.407 LIB libspdk_blobfs_bdev.a 00:04:38.407 CC module/bdev/gpt/vbdev_gpt.o 00:04:38.407 SYMLINK libspdk_fsdev_aio.so 00:04:38.407 SO libspdk_blobfs_bdev.so.6.0 00:04:38.407 SYMLINK libspdk_keyring_linux.so 00:04:38.407 CC module/bdev/error/vbdev_error_rpc.o 00:04:38.407 SYMLINK libspdk_sock_posix.so 00:04:38.407 SYMLINK libspdk_blobfs_bdev.so 00:04:38.665 CC module/bdev/malloc/bdev_malloc.o 00:04:38.665 CC module/bdev/null/bdev_null.o 00:04:38.665 LIB libspdk_bdev_delay.a 00:04:38.665 LIB libspdk_bdev_error.a 00:04:38.665 CC module/bdev/passthru/vbdev_passthru.o 00:04:38.665 CC module/bdev/nvme/bdev_nvme.o 00:04:38.666 
CC module/bdev/raid/bdev_raid.o 00:04:38.666 SO libspdk_bdev_error.so.6.0 00:04:38.666 SO libspdk_bdev_delay.so.6.0 00:04:38.666 LIB libspdk_bdev_gpt.a 00:04:38.666 SYMLINK libspdk_bdev_error.so 00:04:38.666 SYMLINK libspdk_bdev_delay.so 00:04:38.666 CC module/bdev/raid/bdev_raid_rpc.o 00:04:38.666 CC module/bdev/raid/bdev_raid_sb.o 00:04:38.666 SO libspdk_bdev_gpt.so.6.0 00:04:38.924 CC module/bdev/malloc/bdev_malloc_rpc.o 00:04:38.924 SYMLINK libspdk_bdev_gpt.so 00:04:38.924 CC module/bdev/raid/raid0.o 00:04:38.924 CC module/bdev/null/bdev_null_rpc.o 00:04:38.924 LIB libspdk_bdev_lvol.a 00:04:38.924 SO libspdk_bdev_lvol.so.6.0 00:04:38.924 LIB libspdk_bdev_malloc.a 00:04:38.924 SYMLINK libspdk_bdev_lvol.so 00:04:38.924 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:04:39.182 SO libspdk_bdev_malloc.so.6.0 00:04:39.182 CC module/bdev/raid/raid1.o 00:04:39.182 LIB libspdk_bdev_null.a 00:04:39.182 SO libspdk_bdev_null.so.6.0 00:04:39.182 SYMLINK libspdk_bdev_malloc.so 00:04:39.182 SYMLINK libspdk_bdev_null.so 00:04:39.182 CC module/bdev/split/vbdev_split.o 00:04:39.182 CC module/bdev/zone_block/vbdev_zone_block.o 00:04:39.182 CC module/bdev/aio/bdev_aio.o 00:04:39.182 LIB libspdk_bdev_passthru.a 00:04:39.182 SO libspdk_bdev_passthru.so.6.0 00:04:39.441 CC module/bdev/iscsi/bdev_iscsi.o 00:04:39.441 CC module/bdev/ftl/bdev_ftl.o 00:04:39.441 CC module/bdev/virtio/bdev_virtio_scsi.o 00:04:39.441 SYMLINK libspdk_bdev_passthru.so 00:04:39.441 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:04:39.441 CC module/bdev/aio/bdev_aio_rpc.o 00:04:39.441 CC module/bdev/split/vbdev_split_rpc.o 00:04:39.441 CC module/bdev/nvme/bdev_nvme_rpc.o 00:04:39.699 CC module/bdev/nvme/nvme_rpc.o 00:04:39.699 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:04:39.699 LIB libspdk_bdev_aio.a 00:04:39.699 LIB libspdk_bdev_split.a 00:04:39.699 SO libspdk_bdev_aio.so.6.0 00:04:39.699 CC module/bdev/ftl/bdev_ftl_rpc.o 00:04:39.699 SO libspdk_bdev_split.so.6.0 00:04:39.699 SYMLINK libspdk_bdev_aio.so 
00:04:39.699 CC module/bdev/raid/concat.o 00:04:39.699 SYMLINK libspdk_bdev_split.so 00:04:39.699 CC module/bdev/raid/raid5f.o 00:04:39.699 LIB libspdk_bdev_iscsi.a 00:04:39.699 LIB libspdk_bdev_zone_block.a 00:04:39.699 SO libspdk_bdev_iscsi.so.6.0 00:04:39.699 SO libspdk_bdev_zone_block.so.6.0 00:04:39.959 SYMLINK libspdk_bdev_iscsi.so 00:04:39.959 CC module/bdev/virtio/bdev_virtio_blk.o 00:04:39.959 CC module/bdev/nvme/bdev_mdns_client.o 00:04:39.959 SYMLINK libspdk_bdev_zone_block.so 00:04:39.959 CC module/bdev/nvme/vbdev_opal.o 00:04:39.959 LIB libspdk_bdev_ftl.a 00:04:39.959 CC module/bdev/nvme/vbdev_opal_rpc.o 00:04:39.959 CC module/bdev/virtio/bdev_virtio_rpc.o 00:04:39.959 SO libspdk_bdev_ftl.so.6.0 00:04:39.959 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:04:39.959 SYMLINK libspdk_bdev_ftl.so 00:04:40.238 LIB libspdk_bdev_virtio.a 00:04:40.238 SO libspdk_bdev_virtio.so.6.0 00:04:40.238 SYMLINK libspdk_bdev_virtio.so 00:04:40.497 LIB libspdk_bdev_raid.a 00:04:40.497 SO libspdk_bdev_raid.so.6.0 00:04:40.497 SYMLINK libspdk_bdev_raid.so 00:04:42.408 LIB libspdk_bdev_nvme.a 00:04:42.408 SO libspdk_bdev_nvme.so.7.1 00:04:42.408 SYMLINK libspdk_bdev_nvme.so 00:04:42.974 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:04:42.974 CC module/event/subsystems/sock/sock.o 00:04:42.974 CC module/event/subsystems/fsdev/fsdev.o 00:04:42.974 CC module/event/subsystems/vmd/vmd.o 00:04:42.974 CC module/event/subsystems/vmd/vmd_rpc.o 00:04:42.974 CC module/event/subsystems/keyring/keyring.o 00:04:42.974 CC module/event/subsystems/iobuf/iobuf.o 00:04:42.974 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:04:42.974 CC module/event/subsystems/scheduler/scheduler.o 00:04:42.974 LIB libspdk_event_sock.a 00:04:42.974 LIB libspdk_event_keyring.a 00:04:42.974 LIB libspdk_event_vhost_blk.a 00:04:42.974 LIB libspdk_event_fsdev.a 00:04:42.974 LIB libspdk_event_vmd.a 00:04:42.974 LIB libspdk_event_scheduler.a 00:04:42.974 SO libspdk_event_sock.so.5.0 00:04:42.974 SO 
libspdk_event_vhost_blk.so.3.0 00:04:42.974 SO libspdk_event_keyring.so.1.0 00:04:42.974 SO libspdk_event_fsdev.so.1.0 00:04:42.974 LIB libspdk_event_iobuf.a 00:04:42.974 SO libspdk_event_scheduler.so.4.0 00:04:42.974 SO libspdk_event_vmd.so.6.0 00:04:42.974 SYMLINK libspdk_event_sock.so 00:04:42.974 SO libspdk_event_iobuf.so.3.0 00:04:42.974 SYMLINK libspdk_event_keyring.so 00:04:42.974 SYMLINK libspdk_event_vhost_blk.so 00:04:42.974 SYMLINK libspdk_event_fsdev.so 00:04:43.233 SYMLINK libspdk_event_scheduler.so 00:04:43.233 SYMLINK libspdk_event_vmd.so 00:04:43.233 SYMLINK libspdk_event_iobuf.so 00:04:43.492 CC module/event/subsystems/accel/accel.o 00:04:43.492 LIB libspdk_event_accel.a 00:04:43.751 SO libspdk_event_accel.so.6.0 00:04:43.751 SYMLINK libspdk_event_accel.so 00:04:44.010 CC module/event/subsystems/bdev/bdev.o 00:04:44.268 LIB libspdk_event_bdev.a 00:04:44.268 SO libspdk_event_bdev.so.6.0 00:04:44.268 SYMLINK libspdk_event_bdev.so 00:04:44.527 CC module/event/subsystems/nbd/nbd.o 00:04:44.527 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:04:44.527 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:04:44.527 CC module/event/subsystems/scsi/scsi.o 00:04:44.527 CC module/event/subsystems/ublk/ublk.o 00:04:44.786 LIB libspdk_event_nbd.a 00:04:44.786 LIB libspdk_event_ublk.a 00:04:44.786 LIB libspdk_event_scsi.a 00:04:44.786 SO libspdk_event_nbd.so.6.0 00:04:44.786 SO libspdk_event_ublk.so.3.0 00:04:44.786 SO libspdk_event_scsi.so.6.0 00:04:44.786 SYMLINK libspdk_event_nbd.so 00:04:44.786 SYMLINK libspdk_event_ublk.so 00:04:44.786 LIB libspdk_event_nvmf.a 00:04:44.786 SYMLINK libspdk_event_scsi.so 00:04:44.786 SO libspdk_event_nvmf.so.6.0 00:04:45.045 SYMLINK libspdk_event_nvmf.so 00:04:45.045 CC module/event/subsystems/iscsi/iscsi.o 00:04:45.045 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:45.304 LIB libspdk_event_vhost_scsi.a 00:04:45.304 LIB libspdk_event_iscsi.a 00:04:45.304 SO libspdk_event_iscsi.so.6.0 00:04:45.304 SO 
libspdk_event_vhost_scsi.so.3.0 00:04:45.304 SYMLINK libspdk_event_iscsi.so 00:04:45.304 SYMLINK libspdk_event_vhost_scsi.so 00:04:45.562 SO libspdk.so.6.0 00:04:45.562 SYMLINK libspdk.so 00:04:45.821 TEST_HEADER include/spdk/accel.h 00:04:45.821 TEST_HEADER include/spdk/accel_module.h 00:04:45.821 TEST_HEADER include/spdk/assert.h 00:04:45.821 TEST_HEADER include/spdk/barrier.h 00:04:45.821 TEST_HEADER include/spdk/base64.h 00:04:45.821 CC test/rpc_client/rpc_client_test.o 00:04:45.821 CXX app/trace/trace.o 00:04:45.821 TEST_HEADER include/spdk/bdev.h 00:04:45.821 CC app/trace_record/trace_record.o 00:04:45.821 TEST_HEADER include/spdk/bdev_module.h 00:04:45.821 TEST_HEADER include/spdk/bdev_zone.h 00:04:45.821 TEST_HEADER include/spdk/bit_array.h 00:04:45.821 TEST_HEADER include/spdk/bit_pool.h 00:04:45.821 TEST_HEADER include/spdk/blob_bdev.h 00:04:45.821 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:45.821 TEST_HEADER include/spdk/blobfs.h 00:04:45.821 TEST_HEADER include/spdk/blob.h 00:04:45.821 TEST_HEADER include/spdk/conf.h 00:04:45.821 TEST_HEADER include/spdk/config.h 00:04:45.821 TEST_HEADER include/spdk/cpuset.h 00:04:45.821 TEST_HEADER include/spdk/crc16.h 00:04:45.821 TEST_HEADER include/spdk/crc32.h 00:04:45.821 TEST_HEADER include/spdk/crc64.h 00:04:45.821 TEST_HEADER include/spdk/dif.h 00:04:45.821 TEST_HEADER include/spdk/dma.h 00:04:45.821 TEST_HEADER include/spdk/endian.h 00:04:45.821 TEST_HEADER include/spdk/env_dpdk.h 00:04:45.821 TEST_HEADER include/spdk/env.h 00:04:45.821 TEST_HEADER include/spdk/event.h 00:04:45.821 TEST_HEADER include/spdk/fd_group.h 00:04:45.821 TEST_HEADER include/spdk/fd.h 00:04:45.821 TEST_HEADER include/spdk/file.h 00:04:45.821 TEST_HEADER include/spdk/fsdev.h 00:04:45.821 TEST_HEADER include/spdk/fsdev_module.h 00:04:45.821 CC test/thread/poller_perf/poller_perf.o 00:04:45.821 CC app/nvmf_tgt/nvmf_main.o 00:04:45.821 TEST_HEADER include/spdk/ftl.h 00:04:45.821 TEST_HEADER include/spdk/fuse_dispatcher.h 00:04:45.821 
TEST_HEADER include/spdk/gpt_spec.h 00:04:45.821 TEST_HEADER include/spdk/hexlify.h 00:04:45.821 TEST_HEADER include/spdk/histogram_data.h 00:04:45.821 TEST_HEADER include/spdk/idxd.h 00:04:45.821 TEST_HEADER include/spdk/idxd_spec.h 00:04:45.821 CC examples/util/zipf/zipf.o 00:04:45.821 TEST_HEADER include/spdk/init.h 00:04:45.821 TEST_HEADER include/spdk/ioat.h 00:04:45.821 TEST_HEADER include/spdk/ioat_spec.h 00:04:45.821 TEST_HEADER include/spdk/iscsi_spec.h 00:04:45.821 TEST_HEADER include/spdk/json.h 00:04:45.821 TEST_HEADER include/spdk/jsonrpc.h 00:04:45.821 TEST_HEADER include/spdk/keyring.h 00:04:45.821 TEST_HEADER include/spdk/keyring_module.h 00:04:45.821 TEST_HEADER include/spdk/likely.h 00:04:45.821 TEST_HEADER include/spdk/log.h 00:04:45.821 TEST_HEADER include/spdk/lvol.h 00:04:45.821 TEST_HEADER include/spdk/md5.h 00:04:45.821 CC test/dma/test_dma/test_dma.o 00:04:45.821 TEST_HEADER include/spdk/memory.h 00:04:46.080 TEST_HEADER include/spdk/mmio.h 00:04:46.080 TEST_HEADER include/spdk/nbd.h 00:04:46.080 CC test/app/bdev_svc/bdev_svc.o 00:04:46.080 TEST_HEADER include/spdk/net.h 00:04:46.080 TEST_HEADER include/spdk/notify.h 00:04:46.080 TEST_HEADER include/spdk/nvme.h 00:04:46.080 TEST_HEADER include/spdk/nvme_intel.h 00:04:46.080 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:46.080 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:46.080 TEST_HEADER include/spdk/nvme_spec.h 00:04:46.080 TEST_HEADER include/spdk/nvme_zns.h 00:04:46.080 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:46.080 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:46.080 TEST_HEADER include/spdk/nvmf.h 00:04:46.080 TEST_HEADER include/spdk/nvmf_spec.h 00:04:46.080 TEST_HEADER include/spdk/nvmf_transport.h 00:04:46.080 TEST_HEADER include/spdk/opal.h 00:04:46.080 TEST_HEADER include/spdk/opal_spec.h 00:04:46.080 CC test/env/mem_callbacks/mem_callbacks.o 00:04:46.080 TEST_HEADER include/spdk/pci_ids.h 00:04:46.080 TEST_HEADER include/spdk/pipe.h 00:04:46.080 TEST_HEADER 
include/spdk/queue.h 00:04:46.080 TEST_HEADER include/spdk/reduce.h 00:04:46.080 TEST_HEADER include/spdk/rpc.h 00:04:46.080 TEST_HEADER include/spdk/scheduler.h 00:04:46.080 TEST_HEADER include/spdk/scsi.h 00:04:46.080 TEST_HEADER include/spdk/scsi_spec.h 00:04:46.080 TEST_HEADER include/spdk/sock.h 00:04:46.080 TEST_HEADER include/spdk/stdinc.h 00:04:46.080 LINK rpc_client_test 00:04:46.080 TEST_HEADER include/spdk/string.h 00:04:46.080 TEST_HEADER include/spdk/thread.h 00:04:46.080 TEST_HEADER include/spdk/trace.h 00:04:46.080 TEST_HEADER include/spdk/trace_parser.h 00:04:46.080 TEST_HEADER include/spdk/tree.h 00:04:46.080 TEST_HEADER include/spdk/ublk.h 00:04:46.080 TEST_HEADER include/spdk/util.h 00:04:46.080 TEST_HEADER include/spdk/uuid.h 00:04:46.080 TEST_HEADER include/spdk/version.h 00:04:46.080 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:46.080 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:46.080 TEST_HEADER include/spdk/vhost.h 00:04:46.080 TEST_HEADER include/spdk/vmd.h 00:04:46.080 TEST_HEADER include/spdk/xor.h 00:04:46.080 TEST_HEADER include/spdk/zipf.h 00:04:46.080 CXX test/cpp_headers/accel.o 00:04:46.080 LINK poller_perf 00:04:46.080 LINK nvmf_tgt 00:04:46.080 LINK zipf 00:04:46.080 LINK spdk_trace_record 00:04:46.080 LINK bdev_svc 00:04:46.337 CXX test/cpp_headers/accel_module.o 00:04:46.337 CXX test/cpp_headers/assert.o 00:04:46.337 CC test/env/vtophys/vtophys.o 00:04:46.337 LINK spdk_trace 00:04:46.337 LINK vtophys 00:04:46.337 CXX test/cpp_headers/barrier.o 00:04:46.595 CC examples/ioat/perf/perf.o 00:04:46.596 CC examples/idxd/perf/perf.o 00:04:46.596 CC test/app/histogram_perf/histogram_perf.o 00:04:46.596 CC examples/vmd/lsvmd/lsvmd.o 00:04:46.596 LINK test_dma 00:04:46.596 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:46.596 CC app/iscsi_tgt/iscsi_tgt.o 00:04:46.596 CXX test/cpp_headers/base64.o 00:04:46.596 LINK lsvmd 00:04:46.596 LINK histogram_perf 00:04:46.596 LINK mem_callbacks 00:04:46.853 LINK ioat_perf 00:04:46.853 CC 
test/event/event_perf/event_perf.o 00:04:46.853 CXX test/cpp_headers/bdev.o 00:04:46.853 CC examples/vmd/led/led.o 00:04:46.853 LINK iscsi_tgt 00:04:46.853 CC test/app/jsoncat/jsoncat.o 00:04:46.853 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:46.853 LINK event_perf 00:04:46.853 LINK idxd_perf 00:04:46.853 CC test/app/stub/stub.o 00:04:47.111 CC examples/ioat/verify/verify.o 00:04:47.111 CXX test/cpp_headers/bdev_module.o 00:04:47.111 LINK led 00:04:47.111 LINK nvme_fuzz 00:04:47.111 LINK jsoncat 00:04:47.111 LINK env_dpdk_post_init 00:04:47.111 CC test/event/reactor/reactor.o 00:04:47.111 LINK stub 00:04:47.111 CC app/spdk_tgt/spdk_tgt.o 00:04:47.397 CXX test/cpp_headers/bdev_zone.o 00:04:47.397 LINK verify 00:04:47.397 CC test/accel/dif/dif.o 00:04:47.397 CC test/env/memory/memory_ut.o 00:04:47.397 LINK reactor 00:04:47.397 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:47.397 CC app/spdk_lspci/spdk_lspci.o 00:04:47.397 CC test/blobfs/mkfs/mkfs.o 00:04:47.397 CC test/env/pci/pci_ut.o 00:04:47.397 CXX test/cpp_headers/bit_array.o 00:04:47.397 LINK spdk_tgt 00:04:47.655 LINK spdk_lspci 00:04:47.655 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:47.655 CC test/event/reactor_perf/reactor_perf.o 00:04:47.655 LINK mkfs 00:04:47.655 CXX test/cpp_headers/bit_pool.o 00:04:47.914 LINK reactor_perf 00:04:47.914 CC app/spdk_nvme_perf/perf.o 00:04:47.914 CC test/event/app_repeat/app_repeat.o 00:04:47.914 LINK interrupt_tgt 00:04:47.914 CXX test/cpp_headers/blob_bdev.o 00:04:47.914 CXX test/cpp_headers/blobfs_bdev.o 00:04:47.914 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:47.914 LINK app_repeat 00:04:48.173 LINK pci_ut 00:04:48.173 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:48.173 CXX test/cpp_headers/blobfs.o 00:04:48.173 LINK dif 00:04:48.173 CC examples/thread/thread/thread_ex.o 00:04:48.173 CC test/event/scheduler/scheduler.o 00:04:48.431 CXX test/cpp_headers/blob.o 00:04:48.431 CC test/lvol/esnap/esnap.o 00:04:48.431 CC 
app/spdk_nvme_identify/identify.o 00:04:48.431 CXX test/cpp_headers/conf.o 00:04:48.431 CXX test/cpp_headers/config.o 00:04:48.431 LINK thread 00:04:48.690 LINK scheduler 00:04:48.690 CXX test/cpp_headers/cpuset.o 00:04:48.690 LINK vhost_fuzz 00:04:48.690 CC examples/sock/hello_world/hello_sock.o 00:04:48.690 LINK memory_ut 00:04:48.690 CXX test/cpp_headers/crc16.o 00:04:48.949 CC app/spdk_nvme_discover/discovery_aer.o 00:04:48.949 CXX test/cpp_headers/crc32.o 00:04:48.949 CC app/spdk_top/spdk_top.o 00:04:48.949 LINK spdk_nvme_perf 00:04:48.949 CXX test/cpp_headers/crc64.o 00:04:49.207 LINK spdk_nvme_discover 00:04:49.207 LINK hello_sock 00:04:49.207 CC test/nvme/aer/aer.o 00:04:49.207 CC app/vhost/vhost.o 00:04:49.207 CC app/spdk_dd/spdk_dd.o 00:04:49.207 CXX test/cpp_headers/dif.o 00:04:49.465 CXX test/cpp_headers/dma.o 00:04:49.465 LINK vhost 00:04:49.465 CC examples/accel/perf/accel_perf.o 00:04:49.465 LINK aer 00:04:49.465 LINK spdk_nvme_identify 00:04:49.466 CXX test/cpp_headers/endian.o 00:04:49.724 LINK spdk_dd 00:04:49.724 LINK iscsi_fuzz 00:04:49.724 CXX test/cpp_headers/env_dpdk.o 00:04:49.724 CC app/fio/nvme/fio_plugin.o 00:04:49.724 CC app/fio/bdev/fio_plugin.o 00:04:49.724 CC test/nvme/reset/reset.o 00:04:49.724 CC test/nvme/sgl/sgl.o 00:04:49.982 CXX test/cpp_headers/env.o 00:04:49.982 CXX test/cpp_headers/event.o 00:04:49.982 CXX test/cpp_headers/fd_group.o 00:04:50.240 LINK accel_perf 00:04:50.240 CC test/bdev/bdevio/bdevio.o 00:04:50.240 LINK sgl 00:04:50.240 LINK reset 00:04:50.240 LINK spdk_top 00:04:50.240 CC test/nvme/e2edp/nvme_dp.o 00:04:50.240 CXX test/cpp_headers/fd.o 00:04:50.498 CXX test/cpp_headers/file.o 00:04:50.498 CC test/nvme/overhead/overhead.o 00:04:50.498 CC test/nvme/err_injection/err_injection.o 00:04:50.498 LINK spdk_nvme 00:04:50.498 CC test/nvme/startup/startup.o 00:04:50.498 CC examples/blob/hello_world/hello_blob.o 00:04:50.498 LINK spdk_bdev 00:04:50.498 CXX test/cpp_headers/fsdev.o 00:04:50.498 LINK nvme_dp 00:04:50.756 
LINK bdevio 00:04:50.756 LINK err_injection 00:04:50.756 LINK startup 00:04:50.756 CC examples/nvme/hello_world/hello_world.o 00:04:50.756 LINK overhead 00:04:50.756 CC examples/nvme/reconnect/reconnect.o 00:04:50.756 CXX test/cpp_headers/fsdev_module.o 00:04:50.756 CXX test/cpp_headers/ftl.o 00:04:50.756 CXX test/cpp_headers/fuse_dispatcher.o 00:04:50.756 LINK hello_blob 00:04:50.756 CXX test/cpp_headers/gpt_spec.o 00:04:51.015 CXX test/cpp_headers/hexlify.o 00:04:51.015 CXX test/cpp_headers/histogram_data.o 00:04:51.015 CC test/nvme/reserve/reserve.o 00:04:51.015 LINK hello_world 00:04:51.015 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:51.272 CC examples/nvme/arbitration/arbitration.o 00:04:51.272 CC examples/blob/cli/blobcli.o 00:04:51.272 CXX test/cpp_headers/idxd.o 00:04:51.272 CXX test/cpp_headers/idxd_spec.o 00:04:51.272 LINK reserve 00:04:51.272 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:51.272 LINK reconnect 00:04:51.272 CC examples/bdev/hello_world/hello_bdev.o 00:04:51.531 CXX test/cpp_headers/init.o 00:04:51.531 CC test/nvme/simple_copy/simple_copy.o 00:04:51.531 LINK arbitration 00:04:51.531 CC test/nvme/connect_stress/connect_stress.o 00:04:51.531 CXX test/cpp_headers/ioat.o 00:04:51.531 CC examples/bdev/bdevperf/bdevperf.o 00:04:51.531 LINK hello_bdev 00:04:51.788 LINK hello_fsdev 00:04:51.788 LINK nvme_manage 00:04:51.788 LINK blobcli 00:04:51.788 LINK connect_stress 00:04:51.788 CXX test/cpp_headers/ioat_spec.o 00:04:51.788 LINK simple_copy 00:04:51.788 CC examples/nvme/hotplug/hotplug.o 00:04:52.046 CC test/nvme/boot_partition/boot_partition.o 00:04:52.046 CC test/nvme/compliance/nvme_compliance.o 00:04:52.046 CXX test/cpp_headers/iscsi_spec.o 00:04:52.046 CXX test/cpp_headers/json.o 00:04:52.046 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:52.046 CC test/nvme/fused_ordering/fused_ordering.o 00:04:52.046 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:52.046 LINK boot_partition 00:04:52.046 LINK hotplug 00:04:52.306 CXX 
test/cpp_headers/jsonrpc.o 00:04:52.306 LINK cmb_copy 00:04:52.306 CC test/nvme/fdp/fdp.o 00:04:52.306 LINK doorbell_aers 00:04:52.306 CXX test/cpp_headers/keyring.o 00:04:52.306 LINK fused_ordering 00:04:52.306 CC test/nvme/cuse/cuse.o 00:04:52.306 CXX test/cpp_headers/keyring_module.o 00:04:52.306 LINK nvme_compliance 00:04:52.565 CC examples/nvme/abort/abort.o 00:04:52.565 CXX test/cpp_headers/likely.o 00:04:52.565 CXX test/cpp_headers/log.o 00:04:52.565 CXX test/cpp_headers/lvol.o 00:04:52.565 CXX test/cpp_headers/md5.o 00:04:52.565 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:52.565 LINK bdevperf 00:04:52.565 CXX test/cpp_headers/memory.o 00:04:52.565 CXX test/cpp_headers/mmio.o 00:04:52.823 LINK fdp 00:04:52.823 CXX test/cpp_headers/nbd.o 00:04:52.823 CXX test/cpp_headers/net.o 00:04:52.823 CXX test/cpp_headers/notify.o 00:04:52.823 CXX test/cpp_headers/nvme.o 00:04:52.823 CXX test/cpp_headers/nvme_intel.o 00:04:52.823 CXX test/cpp_headers/nvme_ocssd.o 00:04:52.823 LINK pmr_persistence 00:04:52.823 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:52.823 CXX test/cpp_headers/nvme_spec.o 00:04:53.082 LINK abort 00:04:53.082 CXX test/cpp_headers/nvme_zns.o 00:04:53.082 CXX test/cpp_headers/nvmf_cmd.o 00:04:53.082 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:53.082 CXX test/cpp_headers/nvmf.o 00:04:53.082 CXX test/cpp_headers/nvmf_spec.o 00:04:53.082 CXX test/cpp_headers/nvmf_transport.o 00:04:53.082 CXX test/cpp_headers/opal.o 00:04:53.341 CXX test/cpp_headers/opal_spec.o 00:04:53.341 CXX test/cpp_headers/pci_ids.o 00:04:53.341 CXX test/cpp_headers/pipe.o 00:04:53.341 CXX test/cpp_headers/queue.o 00:04:53.341 CXX test/cpp_headers/reduce.o 00:04:53.341 CXX test/cpp_headers/rpc.o 00:04:53.341 CXX test/cpp_headers/scheduler.o 00:04:53.341 CXX test/cpp_headers/scsi.o 00:04:53.341 CC examples/nvmf/nvmf/nvmf.o 00:04:53.341 CXX test/cpp_headers/scsi_spec.o 00:04:53.341 CXX test/cpp_headers/sock.o 00:04:53.341 CXX test/cpp_headers/stdinc.o 00:04:53.341 CXX 
test/cpp_headers/string.o 00:04:53.601 CXX test/cpp_headers/thread.o 00:04:53.601 CXX test/cpp_headers/trace.o 00:04:53.601 CXX test/cpp_headers/trace_parser.o 00:04:53.601 CXX test/cpp_headers/tree.o 00:04:53.601 CXX test/cpp_headers/ublk.o 00:04:53.601 CXX test/cpp_headers/util.o 00:04:53.601 CXX test/cpp_headers/uuid.o 00:04:53.601 CXX test/cpp_headers/version.o 00:04:53.601 CXX test/cpp_headers/vfio_user_pci.o 00:04:53.601 CXX test/cpp_headers/vfio_user_spec.o 00:04:53.601 CXX test/cpp_headers/vhost.o 00:04:53.601 LINK nvmf 00:04:53.601 CXX test/cpp_headers/vmd.o 00:04:53.860 CXX test/cpp_headers/xor.o 00:04:53.860 CXX test/cpp_headers/zipf.o 00:04:54.119 LINK cuse 00:04:55.498 LINK esnap 00:04:56.068 00:04:56.068 real 1m33.149s 00:04:56.068 user 8m45.432s 00:04:56.068 sys 1m40.392s 00:04:56.068 14:04:33 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:56.068 14:04:33 make -- common/autotest_common.sh@10 -- $ set +x 00:04:56.068 ************************************ 00:04:56.068 END TEST make 00:04:56.068 ************************************ 00:04:56.068 14:04:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:56.068 14:04:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:56.068 14:04:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:56.068 14:04:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.068 14:04:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:04:56.068 14:04:33 -- pm/common@44 -- $ pid=5256 00:04:56.068 14:04:33 -- pm/common@50 -- $ kill -TERM 5256 00:04:56.068 14:04:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.068 14:04:33 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:04:56.068 14:04:33 -- pm/common@44 -- $ pid=5257 00:04:56.068 14:04:33 -- pm/common@50 -- $ kill -TERM 5257 00:04:56.068 14:04:33 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 
1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:56.068 14:04:33 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:04:56.327 14:04:33 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:56.327 14:04:33 -- common/autotest_common.sh@1693 -- # lcov --version 00:04:56.327 14:04:33 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:56.327 14:04:33 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:56.327 14:04:33 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:56.327 14:04:33 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:56.327 14:04:33 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:56.327 14:04:33 -- scripts/common.sh@336 -- # IFS=.-: 00:04:56.327 14:04:33 -- scripts/common.sh@336 -- # read -ra ver1 00:04:56.327 14:04:33 -- scripts/common.sh@337 -- # IFS=.-: 00:04:56.327 14:04:33 -- scripts/common.sh@337 -- # read -ra ver2 00:04:56.327 14:04:33 -- scripts/common.sh@338 -- # local 'op=<' 00:04:56.327 14:04:33 -- scripts/common.sh@340 -- # ver1_l=2 00:04:56.327 14:04:33 -- scripts/common.sh@341 -- # ver2_l=1 00:04:56.327 14:04:33 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:56.327 14:04:33 -- scripts/common.sh@344 -- # case "$op" in 00:04:56.327 14:04:33 -- scripts/common.sh@345 -- # : 1 00:04:56.327 14:04:33 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:56.327 14:04:33 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:56.327 14:04:33 -- scripts/common.sh@365 -- # decimal 1 00:04:56.327 14:04:33 -- scripts/common.sh@353 -- # local d=1 00:04:56.327 14:04:33 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:56.327 14:04:33 -- scripts/common.sh@355 -- # echo 1 00:04:56.327 14:04:33 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:56.327 14:04:33 -- scripts/common.sh@366 -- # decimal 2 00:04:56.327 14:04:33 -- scripts/common.sh@353 -- # local d=2 00:04:56.327 14:04:33 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:56.327 14:04:33 -- scripts/common.sh@355 -- # echo 2 00:04:56.327 14:04:33 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:56.327 14:04:33 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:56.327 14:04:33 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:56.327 14:04:33 -- scripts/common.sh@368 -- # return 0 00:04:56.327 14:04:33 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:56.327 14:04:33 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:56.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.327 --rc genhtml_branch_coverage=1 00:04:56.327 --rc genhtml_function_coverage=1 00:04:56.327 --rc genhtml_legend=1 00:04:56.327 --rc geninfo_all_blocks=1 00:04:56.327 --rc geninfo_unexecuted_blocks=1 00:04:56.327 00:04:56.327 ' 00:04:56.327 14:04:33 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:56.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.327 --rc genhtml_branch_coverage=1 00:04:56.327 --rc genhtml_function_coverage=1 00:04:56.327 --rc genhtml_legend=1 00:04:56.327 --rc geninfo_all_blocks=1 00:04:56.327 --rc geninfo_unexecuted_blocks=1 00:04:56.327 00:04:56.327 ' 00:04:56.327 14:04:33 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:56.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.327 --rc genhtml_branch_coverage=1 00:04:56.327 --rc 
genhtml_function_coverage=1 00:04:56.327 --rc genhtml_legend=1 00:04:56.327 --rc geninfo_all_blocks=1 00:04:56.327 --rc geninfo_unexecuted_blocks=1 00:04:56.327 00:04:56.327 ' 00:04:56.327 14:04:33 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:56.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:56.327 --rc genhtml_branch_coverage=1 00:04:56.327 --rc genhtml_function_coverage=1 00:04:56.327 --rc genhtml_legend=1 00:04:56.327 --rc geninfo_all_blocks=1 00:04:56.327 --rc geninfo_unexecuted_blocks=1 00:04:56.327 00:04:56.327 ' 00:04:56.327 14:04:33 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:56.327 14:04:33 -- nvmf/common.sh@7 -- # uname -s 00:04:56.327 14:04:33 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.327 14:04:33 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.327 14:04:33 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.327 14:04:33 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.327 14:04:33 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.327 14:04:33 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.327 14:04:33 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.327 14:04:33 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.328 14:04:33 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.328 14:04:33 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.328 14:04:33 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5c5f7f81-f6ef-45c0-af5d-fb790bbde370 00:04:56.328 14:04:33 -- nvmf/common.sh@18 -- # NVME_HOSTID=5c5f7f81-f6ef-45c0-af5d-fb790bbde370 00:04:56.328 14:04:33 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.328 14:04:33 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.328 14:04:33 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.328 14:04:33 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:04:56.328 14:04:33 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:56.328 14:04:33 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:56.328 14:04:33 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.328 14:04:33 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.328 14:04:33 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.328 14:04:33 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.328 14:04:33 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.328 14:04:33 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.328 14:04:33 -- paths/export.sh@5 -- # export PATH 00:04:56.328 14:04:33 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.328 14:04:33 -- nvmf/common.sh@51 -- # : 0 00:04:56.328 14:04:33 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:56.328 14:04:33 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:56.328 14:04:33 -- nvmf/common.sh@25 
-- # '[' 0 -eq 1 ']' 00:04:56.328 14:04:33 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.328 14:04:33 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.328 14:04:33 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:56.328 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:56.328 14:04:33 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:56.328 14:04:33 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:56.328 14:04:33 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:56.328 14:04:33 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:56.328 14:04:33 -- spdk/autotest.sh@32 -- # uname -s 00:04:56.328 14:04:33 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:56.328 14:04:33 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:56.328 14:04:33 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.328 14:04:33 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:04:56.328 14:04:33 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:04:56.328 14:04:33 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:56.328 14:04:33 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:56.328 14:04:33 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:56.328 14:04:33 -- spdk/autotest.sh@48 -- # udevadm_pid=54269 00:04:56.328 14:04:33 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:56.328 14:04:33 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:56.328 14:04:33 -- pm/common@17 -- # local monitor 00:04:56.328 14:04:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.328 14:04:33 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:56.328 14:04:33 -- pm/common@25 -- # sleep 1 00:04:56.328 14:04:33 -- pm/common@21 -- # date +%s 00:04:56.328 14:04:33 -- 
pm/common@21 -- # date +%s 00:04:56.328 14:04:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732716273 00:04:56.328 14:04:33 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1732716273 00:04:56.328 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732716273_collect-cpu-load.pm.log 00:04:56.328 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1732716273_collect-vmstat.pm.log 00:04:57.703 14:04:34 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:57.703 14:04:34 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:57.703 14:04:34 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:57.703 14:04:34 -- common/autotest_common.sh@10 -- # set +x 00:04:57.704 14:04:34 -- spdk/autotest.sh@59 -- # create_test_list 00:04:57.704 14:04:34 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:57.704 14:04:34 -- common/autotest_common.sh@10 -- # set +x 00:04:57.704 14:04:34 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:04:57.704 14:04:34 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:04:57.704 14:04:34 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:04:57.704 14:04:34 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:04:57.704 14:04:34 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:04:57.704 14:04:34 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:57.704 14:04:34 -- common/autotest_common.sh@1457 -- # uname 00:04:57.704 14:04:34 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:57.704 14:04:34 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:57.704 14:04:34 -- common/autotest_common.sh@1477 -- 
# uname 00:04:57.704 14:04:34 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:57.704 14:04:34 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:57.704 14:04:34 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:57.704 lcov: LCOV version 1.15 00:04:57.704 14:04:34 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:05:15.803 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:15.803 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:05:30.685 14:05:07 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:30.685 14:05:07 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:30.685 14:05:07 -- common/autotest_common.sh@10 -- # set +x 00:05:30.685 14:05:07 -- spdk/autotest.sh@78 -- # rm -f 00:05:30.685 14:05:07 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:31.622 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:31.622 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:05:31.622 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:05:31.622 14:05:08 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:31.622 14:05:08 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:31.622 14:05:08 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:31.622 14:05:08 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:31.622 
14:05:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:31.622 14:05:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:31.622 14:05:08 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:31.622 14:05:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:31.622 14:05:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:31.622 14:05:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:31.622 14:05:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n1 00:05:31.622 14:05:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:05:31.622 14:05:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:05:31.622 14:05:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:31.622 14:05:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:31.622 14:05:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n2 00:05:31.622 14:05:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:05:31.622 14:05:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:05:31.622 14:05:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:31.622 14:05:08 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:31.622 14:05:08 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme1n3 00:05:31.622 14:05:08 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:05:31.622 14:05:08 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:05:31.622 14:05:08 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:31.622 14:05:08 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:31.622 14:05:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.622 14:05:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.622 14:05:08 -- spdk/autotest.sh@100 -- # 
block_in_use /dev/nvme0n1 00:05:31.622 14:05:08 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:31.622 14:05:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:31.622 No valid GPT data, bailing 00:05:31.622 14:05:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:31.622 14:05:08 -- scripts/common.sh@394 -- # pt= 00:05:31.622 14:05:08 -- scripts/common.sh@395 -- # return 1 00:05:31.622 14:05:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:31.622 1+0 records in 00:05:31.622 1+0 records out 00:05:31.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00415345 s, 252 MB/s 00:05:31.622 14:05:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.622 14:05:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.622 14:05:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:05:31.622 14:05:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:05:31.622 14:05:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:05:31.622 No valid GPT data, bailing 00:05:31.622 14:05:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:05:31.622 14:05:08 -- scripts/common.sh@394 -- # pt= 00:05:31.622 14:05:08 -- scripts/common.sh@395 -- # return 1 00:05:31.622 14:05:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:05:31.622 1+0 records in 00:05:31.622 1+0 records out 00:05:31.622 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00409218 s, 256 MB/s 00:05:31.622 14:05:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.622 14:05:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.622 14:05:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:05:31.622 14:05:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:05:31.622 14:05:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 
00:05:31.880 No valid GPT data, bailing 00:05:31.880 14:05:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:05:31.880 14:05:08 -- scripts/common.sh@394 -- # pt= 00:05:31.880 14:05:08 -- scripts/common.sh@395 -- # return 1 00:05:31.880 14:05:08 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:05:31.880 1+0 records in 00:05:31.880 1+0 records out 00:05:31.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00517084 s, 203 MB/s 00:05:31.880 14:05:08 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:31.880 14:05:08 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:31.880 14:05:08 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:05:31.880 14:05:08 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:05:31.880 14:05:08 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:05:31.880 No valid GPT data, bailing 00:05:31.880 14:05:08 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:05:31.880 14:05:09 -- scripts/common.sh@394 -- # pt= 00:05:31.880 14:05:09 -- scripts/common.sh@395 -- # return 1 00:05:31.880 14:05:09 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:05:31.880 1+0 records in 00:05:31.880 1+0 records out 00:05:31.880 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00435615 s, 241 MB/s 00:05:31.880 14:05:09 -- spdk/autotest.sh@105 -- # sync 00:05:31.880 14:05:09 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:31.880 14:05:09 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:31.880 14:05:09 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:33.788 14:05:11 -- spdk/autotest.sh@111 -- # uname -s 00:05:33.788 14:05:11 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:33.788 14:05:11 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:33.788 14:05:11 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 
00:05:34.723 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:34.723 Hugepages 00:05:34.723 node hugesize free / total 00:05:34.723 node0 1048576kB 0 / 0 00:05:34.723 node0 2048kB 0 / 0 00:05:34.723 00:05:34.723 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:34.723 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:05:34.723 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:05:34.723 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:05:34.723 14:05:11 -- spdk/autotest.sh@117 -- # uname -s 00:05:34.723 14:05:11 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:34.723 14:05:11 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:34.723 14:05:11 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:35.660 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:35.660 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.660 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:35.660 14:05:12 -- common/autotest_common.sh@1517 -- # sleep 1 00:05:36.601 14:05:13 -- common/autotest_common.sh@1518 -- # bdfs=() 00:05:36.601 14:05:13 -- common/autotest_common.sh@1518 -- # local bdfs 00:05:36.601 14:05:13 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:05:36.601 14:05:13 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:05:36.601 14:05:13 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:36.601 14:05:13 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:36.601 14:05:13 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:36.601 14:05:13 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:36.601 14:05:13 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:36.860 14:05:13 -- 
common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:36.860 14:05:13 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:36.860 14:05:13 -- common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:05:37.118 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:37.118 Waiting for block devices as requested 00:05:37.118 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.377 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:05:37.377 14:05:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:37.377 14:05:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:05:37.377 14:05:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:05:37.377 14:05:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:37.377 14:05:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:37.377 14:05:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:05:37.377 14:05:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:05:37.377 14:05:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:05:37.377 14:05:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:05:37.377 14:05:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:05:37.377 14:05:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:05:37.377 14:05:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:37.377 14:05:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:37.377 14:05:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:37.377 14:05:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:37.377 14:05:14 -- common/autotest_common.sh@1534 -- 
# [[ 8 -ne 0 ]] 00:05:37.377 14:05:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:05:37.377 14:05:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:37.377 14:05:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:37.377 14:05:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:37.377 14:05:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:37.377 14:05:14 -- common/autotest_common.sh@1543 -- # continue 00:05:37.377 14:05:14 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:05:37.377 14:05:14 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:05:37.377 14:05:14 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 00:05:37.377 14:05:14 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:05:37.377 14:05:14 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:37.377 14:05:14 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:05:37.377 14:05:14 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:05:37.377 14:05:14 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:05:37.377 14:05:14 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:05:37.377 14:05:14 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:05:37.377 14:05:14 -- common/autotest_common.sh@1531 -- # grep oacs 00:05:37.377 14:05:14 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:05:37.377 14:05:14 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:05:37.377 14:05:14 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:05:37.377 14:05:14 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:05:37.377 14:05:14 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:05:37.377 14:05:14 -- common/autotest_common.sh@1540 -- # nvme id-ctrl 
/dev/nvme0 00:05:37.377 14:05:14 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:05:37.377 14:05:14 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:05:37.377 14:05:14 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:05:37.377 14:05:14 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:05:37.377 14:05:14 -- common/autotest_common.sh@1543 -- # continue 00:05:37.377 14:05:14 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:05:37.377 14:05:14 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.377 14:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:37.377 14:05:14 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:05:37.377 14:05:14 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:37.377 14:05:14 -- common/autotest_common.sh@10 -- # set +x 00:05:37.377 14:05:14 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:05:38.313 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:05:38.313 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.313 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:05:38.313 14:05:15 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:05:38.313 14:05:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.313 14:05:15 -- common/autotest_common.sh@10 -- # set +x 00:05:38.313 14:05:15 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:05:38.313 14:05:15 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:05:38.313 14:05:15 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:05:38.313 14:05:15 -- common/autotest_common.sh@1563 -- # bdfs=() 00:05:38.313 14:05:15 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:05:38.313 14:05:15 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:05:38.313 14:05:15 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:05:38.313 14:05:15 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:05:38.313 
14:05:15 -- common/autotest_common.sh@1498 -- # bdfs=() 00:05:38.313 14:05:15 -- common/autotest_common.sh@1498 -- # local bdfs 00:05:38.313 14:05:15 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:38.313 14:05:15 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:38.313 14:05:15 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:05:38.313 14:05:15 -- common/autotest_common.sh@1500 -- # (( 2 == 0 )) 00:05:38.313 14:05:15 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 00:05:38.313 14:05:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.313 14:05:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:05:38.313 14:05:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.313 14:05:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.313 14:05:15 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:05:38.313 14:05:15 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:05:38.313 14:05:15 -- common/autotest_common.sh@1566 -- # device=0x0010 00:05:38.313 14:05:15 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:05:38.313 14:05:15 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:05:38.313 14:05:15 -- common/autotest_common.sh@1572 -- # return 0 00:05:38.313 14:05:15 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:05:38.313 14:05:15 -- common/autotest_common.sh@1580 -- # return 0 00:05:38.313 14:05:15 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:38.313 14:05:15 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:38.313 14:05:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:38.313 14:05:15 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:38.313 14:05:15 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:38.313 14:05:15 -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.313 14:05:15 -- common/autotest_common.sh@10 -- # set +x 00:05:38.313 14:05:15 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:38.313 14:05:15 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.313 14:05:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.313 14:05:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.313 14:05:15 -- common/autotest_common.sh@10 -- # set +x 00:05:38.572 ************************************ 00:05:38.572 START TEST env 00:05:38.572 ************************************ 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:05:38.572 * Looking for test storage... 00:05:38.572 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.572 14:05:15 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.572 14:05:15 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.572 14:05:15 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.572 14:05:15 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.572 14:05:15 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.572 14:05:15 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.572 14:05:15 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.572 14:05:15 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.572 14:05:15 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.572 14:05:15 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.572 14:05:15 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.572 14:05:15 env -- 
scripts/common.sh@344 -- # case "$op" in 00:05:38.572 14:05:15 env -- scripts/common.sh@345 -- # : 1 00:05:38.572 14:05:15 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.572 14:05:15 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.572 14:05:15 env -- scripts/common.sh@365 -- # decimal 1 00:05:38.572 14:05:15 env -- scripts/common.sh@353 -- # local d=1 00:05:38.572 14:05:15 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.572 14:05:15 env -- scripts/common.sh@355 -- # echo 1 00:05:38.572 14:05:15 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.572 14:05:15 env -- scripts/common.sh@366 -- # decimal 2 00:05:38.572 14:05:15 env -- scripts/common.sh@353 -- # local d=2 00:05:38.572 14:05:15 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.572 14:05:15 env -- scripts/common.sh@355 -- # echo 2 00:05:38.572 14:05:15 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.572 14:05:15 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.572 14:05:15 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.572 14:05:15 env -- scripts/common.sh@368 -- # return 0 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.572 --rc genhtml_branch_coverage=1 00:05:38.572 --rc genhtml_function_coverage=1 00:05:38.572 --rc genhtml_legend=1 00:05:38.572 --rc geninfo_all_blocks=1 00:05:38.572 --rc geninfo_unexecuted_blocks=1 00:05:38.572 00:05:38.572 ' 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.572 --rc genhtml_branch_coverage=1 00:05:38.572 --rc genhtml_function_coverage=1 00:05:38.572 --rc genhtml_legend=1 00:05:38.572 --rc 
geninfo_all_blocks=1 00:05:38.572 --rc geninfo_unexecuted_blocks=1 00:05:38.572 00:05:38.572 ' 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.572 --rc genhtml_branch_coverage=1 00:05:38.572 --rc genhtml_function_coverage=1 00:05:38.572 --rc genhtml_legend=1 00:05:38.572 --rc geninfo_all_blocks=1 00:05:38.572 --rc geninfo_unexecuted_blocks=1 00:05:38.572 00:05:38.572 ' 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.572 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.572 --rc genhtml_branch_coverage=1 00:05:38.572 --rc genhtml_function_coverage=1 00:05:38.572 --rc genhtml_legend=1 00:05:38.572 --rc geninfo_all_blocks=1 00:05:38.572 --rc geninfo_unexecuted_blocks=1 00:05:38.572 00:05:38.572 ' 00:05:38.572 14:05:15 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:38.572 14:05:15 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:38.572 14:05:15 env -- common/autotest_common.sh@10 -- # set +x 00:05:38.572 ************************************ 00:05:38.572 START TEST env_memory 00:05:38.572 ************************************ 00:05:38.572 14:05:15 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:05:38.572 00:05:38.572 00:05:38.572 CUnit - A unit testing framework for C - Version 2.1-3 00:05:38.572 http://cunit.sourceforge.net/ 00:05:38.572 00:05:38.572 00:05:38.572 Suite: memory 00:05:38.831 Test: alloc and free memory map ...[2024-11-27 14:05:15.869959] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:38.831 passed 00:05:38.831 Test: mem map translation ...[2024-11-27 14:05:15.932942] 
/home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:38.831 [2024-11-27 14:05:15.933095] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:38.831 [2024-11-27 14:05:15.933244] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:38.831 [2024-11-27 14:05:15.933304] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:38.831 passed 00:05:38.831 Test: mem map registration ...[2024-11-27 14:05:16.033472] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:38.831 [2024-11-27 14:05:16.033597] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:38.831 passed 00:05:39.090 Test: mem map adjacent registrations ...passed 00:05:39.090 00:05:39.090 Run Summary: Type Total Ran Passed Failed Inactive 00:05:39.090 suites 1 1 n/a 0 0 00:05:39.090 tests 4 4 4 0 0 00:05:39.090 asserts 152 152 152 0 n/a 00:05:39.090 00:05:39.090 Elapsed time = 0.349 seconds 00:05:39.090 00:05:39.090 real 0m0.389s 00:05:39.090 user 0m0.351s 00:05:39.090 sys 0m0.029s 00:05:39.090 14:05:16 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:39.090 14:05:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:39.090 ************************************ 00:05:39.090 END TEST env_memory 00:05:39.090 ************************************ 00:05:39.090 14:05:16 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:39.090 
14:05:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:39.090 14:05:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:39.090 14:05:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:39.090 ************************************ 00:05:39.090 START TEST env_vtophys 00:05:39.090 ************************************ 00:05:39.090 14:05:16 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:05:39.090 EAL: lib.eal log level changed from notice to debug 00:05:39.090 EAL: Detected lcore 0 as core 0 on socket 0 00:05:39.090 EAL: Detected lcore 1 as core 0 on socket 0 00:05:39.090 EAL: Detected lcore 2 as core 0 on socket 0 00:05:39.090 EAL: Detected lcore 3 as core 0 on socket 0 00:05:39.090 EAL: Detected lcore 4 as core 0 on socket 0 00:05:39.090 EAL: Detected lcore 5 as core 0 on socket 0 00:05:39.090 EAL: Detected lcore 6 as core 0 on socket 0 00:05:39.090 EAL: Detected lcore 7 as core 0 on socket 0 00:05:39.090 EAL: Detected lcore 8 as core 0 on socket 0 00:05:39.090 EAL: Detected lcore 9 as core 0 on socket 0 00:05:39.090 EAL: Maximum logical cores by configuration: 128 00:05:39.090 EAL: Detected CPU lcores: 10 00:05:39.090 EAL: Detected NUMA nodes: 1 00:05:39.090 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:05:39.090 EAL: Detected shared linkage of DPDK 00:05:39.090 EAL: No shared files mode enabled, IPC will be disabled 00:05:39.090 EAL: Selected IOVA mode 'PA' 00:05:39.090 EAL: Probing VFIO support... 00:05:39.090 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:39.090 EAL: VFIO modules not loaded, skipping VFIO support... 00:05:39.090 EAL: Ask a virtual area of 0x2e000 bytes 00:05:39.090 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:39.090 EAL: Setting up physically contiguous memory... 
00:05:39.090 EAL: Setting maximum number of open files to 524288 00:05:39.090 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:39.090 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:39.090 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.090 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:39.090 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.090 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.090 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:39.090 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:39.090 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.090 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:39.090 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.090 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.090 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:39.090 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:39.090 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.090 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:39.090 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.090 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.090 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:39.090 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:39.090 EAL: Ask a virtual area of 0x61000 bytes 00:05:39.090 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:39.090 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:39.090 EAL: Ask a virtual area of 0x400000000 bytes 00:05:39.090 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:39.090 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:39.090 EAL: Hugepages will be freed exactly as allocated. 
00:05:39.090 EAL: No shared files mode enabled, IPC is disabled 00:05:39.090 EAL: No shared files mode enabled, IPC is disabled 00:05:39.348 EAL: TSC frequency is ~2200000 KHz 00:05:39.348 EAL: Main lcore 0 is ready (tid=7f366c216a40;cpuset=[0]) 00:05:39.348 EAL: Trying to obtain current memory policy. 00:05:39.348 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.348 EAL: Restoring previous memory policy: 0 00:05:39.348 EAL: request: mp_malloc_sync 00:05:39.348 EAL: No shared files mode enabled, IPC is disabled 00:05:39.348 EAL: Heap on socket 0 was expanded by 2MB 00:05:39.348 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:05:39.348 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:39.348 EAL: Mem event callback 'spdk:(nil)' registered 00:05:39.348 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:05:39.348 00:05:39.348 00:05:39.348 CUnit - A unit testing framework for C - Version 2.1-3 00:05:39.348 http://cunit.sourceforge.net/ 00:05:39.348 00:05:39.348 00:05:39.348 Suite: components_suite 00:05:39.913 Test: vtophys_malloc_test ...passed 00:05:39.913 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:39.913 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.913 EAL: Restoring previous memory policy: 4 00:05:39.913 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.913 EAL: request: mp_malloc_sync 00:05:39.913 EAL: No shared files mode enabled, IPC is disabled 00:05:39.913 EAL: Heap on socket 0 was expanded by 4MB 00:05:39.913 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.914 EAL: request: mp_malloc_sync 00:05:39.914 EAL: No shared files mode enabled, IPC is disabled 00:05:39.914 EAL: Heap on socket 0 was shrunk by 4MB 00:05:39.914 EAL: Trying to obtain current memory policy. 
00:05:39.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.914 EAL: Restoring previous memory policy: 4 00:05:39.914 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.914 EAL: request: mp_malloc_sync 00:05:39.914 EAL: No shared files mode enabled, IPC is disabled 00:05:39.914 EAL: Heap on socket 0 was expanded by 6MB 00:05:39.914 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.914 EAL: request: mp_malloc_sync 00:05:39.914 EAL: No shared files mode enabled, IPC is disabled 00:05:39.914 EAL: Heap on socket 0 was shrunk by 6MB 00:05:39.914 EAL: Trying to obtain current memory policy. 00:05:39.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.914 EAL: Restoring previous memory policy: 4 00:05:39.914 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.914 EAL: request: mp_malloc_sync 00:05:39.914 EAL: No shared files mode enabled, IPC is disabled 00:05:39.914 EAL: Heap on socket 0 was expanded by 10MB 00:05:39.914 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.914 EAL: request: mp_malloc_sync 00:05:39.914 EAL: No shared files mode enabled, IPC is disabled 00:05:39.914 EAL: Heap on socket 0 was shrunk by 10MB 00:05:39.914 EAL: Trying to obtain current memory policy. 00:05:39.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.914 EAL: Restoring previous memory policy: 4 00:05:39.914 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.914 EAL: request: mp_malloc_sync 00:05:39.914 EAL: No shared files mode enabled, IPC is disabled 00:05:39.914 EAL: Heap on socket 0 was expanded by 18MB 00:05:39.914 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.914 EAL: request: mp_malloc_sync 00:05:39.914 EAL: No shared files mode enabled, IPC is disabled 00:05:39.914 EAL: Heap on socket 0 was shrunk by 18MB 00:05:39.914 EAL: Trying to obtain current memory policy. 
00:05:39.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:39.914 EAL: Restoring previous memory policy: 4 00:05:39.914 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.914 EAL: request: mp_malloc_sync 00:05:39.914 EAL: No shared files mode enabled, IPC is disabled 00:05:39.914 EAL: Heap on socket 0 was expanded by 34MB 00:05:39.914 EAL: Calling mem event callback 'spdk:(nil)' 00:05:39.914 EAL: request: mp_malloc_sync 00:05:39.914 EAL: No shared files mode enabled, IPC is disabled 00:05:39.914 EAL: Heap on socket 0 was shrunk by 34MB 00:05:39.914 EAL: Trying to obtain current memory policy. 00:05:39.914 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.172 EAL: Restoring previous memory policy: 4 00:05:40.172 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.172 EAL: request: mp_malloc_sync 00:05:40.172 EAL: No shared files mode enabled, IPC is disabled 00:05:40.172 EAL: Heap on socket 0 was expanded by 66MB 00:05:40.172 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.172 EAL: request: mp_malloc_sync 00:05:40.172 EAL: No shared files mode enabled, IPC is disabled 00:05:40.172 EAL: Heap on socket 0 was shrunk by 66MB 00:05:40.172 EAL: Trying to obtain current memory policy. 00:05:40.172 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.172 EAL: Restoring previous memory policy: 4 00:05:40.172 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.172 EAL: request: mp_malloc_sync 00:05:40.172 EAL: No shared files mode enabled, IPC is disabled 00:05:40.172 EAL: Heap on socket 0 was expanded by 130MB 00:05:40.446 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.446 EAL: request: mp_malloc_sync 00:05:40.446 EAL: No shared files mode enabled, IPC is disabled 00:05:40.446 EAL: Heap on socket 0 was shrunk by 130MB 00:05:40.704 EAL: Trying to obtain current memory policy. 
00:05:40.704 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:40.704 EAL: Restoring previous memory policy: 4 00:05:40.704 EAL: Calling mem event callback 'spdk:(nil)' 00:05:40.704 EAL: request: mp_malloc_sync 00:05:40.704 EAL: No shared files mode enabled, IPC is disabled 00:05:40.704 EAL: Heap on socket 0 was expanded by 258MB 00:05:41.270 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.270 EAL: request: mp_malloc_sync 00:05:41.270 EAL: No shared files mode enabled, IPC is disabled 00:05:41.270 EAL: Heap on socket 0 was shrunk by 258MB 00:05:41.528 EAL: Trying to obtain current memory policy. 00:05:41.528 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:41.786 EAL: Restoring previous memory policy: 4 00:05:41.786 EAL: Calling mem event callback 'spdk:(nil)' 00:05:41.786 EAL: request: mp_malloc_sync 00:05:41.786 EAL: No shared files mode enabled, IPC is disabled 00:05:41.786 EAL: Heap on socket 0 was expanded by 514MB 00:05:42.721 EAL: Calling mem event callback 'spdk:(nil)' 00:05:42.721 EAL: request: mp_malloc_sync 00:05:42.721 EAL: No shared files mode enabled, IPC is disabled 00:05:42.721 EAL: Heap on socket 0 was shrunk by 514MB 00:05:43.288 EAL: Trying to obtain current memory policy. 
00:05:43.288 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:43.546 EAL: Restoring previous memory policy: 4 00:05:43.546 EAL: Calling mem event callback 'spdk:(nil)' 00:05:43.546 EAL: request: mp_malloc_sync 00:05:43.546 EAL: No shared files mode enabled, IPC is disabled 00:05:43.546 EAL: Heap on socket 0 was expanded by 1026MB 00:05:45.449 EAL: Calling mem event callback 'spdk:(nil)' 00:05:45.449 EAL: request: mp_malloc_sync 00:05:45.449 EAL: No shared files mode enabled, IPC is disabled 00:05:45.449 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:46.853 passed 00:05:46.853 00:05:46.853 Run Summary: Type Total Ran Passed Failed Inactive 00:05:46.853 suites 1 1 n/a 0 0 00:05:46.853 tests 2 2 2 0 0 00:05:46.853 asserts 5642 5642 5642 0 n/a 00:05:46.853 00:05:46.853 Elapsed time = 7.504 seconds 00:05:46.853 EAL: Calling mem event callback 'spdk:(nil)' 00:05:46.853 EAL: request: mp_malloc_sync 00:05:46.853 EAL: No shared files mode enabled, IPC is disabled 00:05:46.853 EAL: Heap on socket 0 was shrunk by 2MB 00:05:46.853 EAL: No shared files mode enabled, IPC is disabled 00:05:46.853 EAL: No shared files mode enabled, IPC is disabled 00:05:46.853 EAL: No shared files mode enabled, IPC is disabled 00:05:46.853 00:05:46.853 real 0m7.856s 00:05:46.853 user 0m6.639s 00:05:46.853 sys 0m1.055s 00:05:46.853 14:05:24 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.853 14:05:24 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:46.853 ************************************ 00:05:46.853 END TEST env_vtophys 00:05:46.853 ************************************ 00:05:46.853 14:05:24 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:46.853 14:05:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:46.853 14:05:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.853 14:05:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.112 
************************************ 00:05:47.112 START TEST env_pci 00:05:47.112 ************************************ 00:05:47.112 14:05:24 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:05:47.112 00:05:47.112 00:05:47.112 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.112 http://cunit.sourceforge.net/ 00:05:47.112 00:05:47.112 00:05:47.112 Suite: pci 00:05:47.112 Test: pci_hook ...[2024-11-27 14:05:24.180393] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 56587 has claimed it 00:05:47.112 passed 00:05:47.112 00:05:47.112 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.112 suites 1 1 n/a 0 0 00:05:47.112 tests 1 1 1 0 0 00:05:47.112 asserts 25 25 25 0 n/a 00:05:47.112 00:05:47.112 Elapsed time = 0.008 seconds 00:05:47.112 EAL: Cannot find device (10000:00:01.0) 00:05:47.112 EAL: Failed to attach device on primary process 00:05:47.112 00:05:47.112 real 0m0.084s 00:05:47.112 user 0m0.040s 00:05:47.112 sys 0m0.042s 00:05:47.112 14:05:24 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.112 ************************************ 00:05:47.112 14:05:24 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:47.112 END TEST env_pci 00:05:47.112 ************************************ 00:05:47.112 14:05:24 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:47.112 14:05:24 env -- env/env.sh@15 -- # uname 00:05:47.112 14:05:24 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:47.112 14:05:24 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:47.112 14:05:24 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.112 14:05:24 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:47.112 14:05:24 env 
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.112 14:05:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.112 ************************************ 00:05:47.112 START TEST env_dpdk_post_init 00:05:47.112 ************************************ 00:05:47.112 14:05:24 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:47.112 EAL: Detected CPU lcores: 10 00:05:47.112 EAL: Detected NUMA nodes: 1 00:05:47.112 EAL: Detected shared linkage of DPDK 00:05:47.112 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.112 EAL: Selected IOVA mode 'PA' 00:05:47.371 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:47.372 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:05:47.372 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:05:47.372 Starting DPDK initialization... 00:05:47.372 Starting SPDK post initialization... 00:05:47.372 SPDK NVMe probe 00:05:47.372 Attaching to 0000:00:10.0 00:05:47.372 Attaching to 0000:00:11.0 00:05:47.372 Attached to 0000:00:10.0 00:05:47.372 Attached to 0000:00:11.0 00:05:47.372 Cleaning up... 
00:05:47.372 00:05:47.372 real 0m0.309s 00:05:47.372 user 0m0.115s 00:05:47.372 sys 0m0.094s 00:05:47.372 14:05:24 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.372 14:05:24 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:47.372 ************************************ 00:05:47.372 END TEST env_dpdk_post_init 00:05:47.372 ************************************ 00:05:47.372 14:05:24 env -- env/env.sh@26 -- # uname 00:05:47.372 14:05:24 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:47.372 14:05:24 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.372 14:05:24 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.372 14:05:24 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.372 14:05:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.372 ************************************ 00:05:47.372 START TEST env_mem_callbacks 00:05:47.372 ************************************ 00:05:47.372 14:05:24 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:05:47.630 EAL: Detected CPU lcores: 10 00:05:47.630 EAL: Detected NUMA nodes: 1 00:05:47.630 EAL: Detected shared linkage of DPDK 00:05:47.630 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:47.630 EAL: Selected IOVA mode 'PA' 00:05:47.630 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:47.630 00:05:47.630 00:05:47.630 CUnit - A unit testing framework for C - Version 2.1-3 00:05:47.630 http://cunit.sourceforge.net/ 00:05:47.630 00:05:47.630 00:05:47.631 Suite: memory 00:05:47.631 Test: test ... 
00:05:47.631 register 0x200000200000 2097152 00:05:47.631 malloc 3145728 00:05:47.631 register 0x200000400000 4194304 00:05:47.631 buf 0x2000004fffc0 len 3145728 PASSED 00:05:47.631 malloc 64 00:05:47.631 buf 0x2000004ffec0 len 64 PASSED 00:05:47.631 malloc 4194304 00:05:47.631 register 0x200000800000 6291456 00:05:47.631 buf 0x2000009fffc0 len 4194304 PASSED 00:05:47.631 free 0x2000004fffc0 3145728 00:05:47.631 free 0x2000004ffec0 64 00:05:47.631 unregister 0x200000400000 4194304 PASSED 00:05:47.631 free 0x2000009fffc0 4194304 00:05:47.631 unregister 0x200000800000 6291456 PASSED 00:05:47.631 malloc 8388608 00:05:47.631 register 0x200000400000 10485760 00:05:47.631 buf 0x2000005fffc0 len 8388608 PASSED 00:05:47.631 free 0x2000005fffc0 8388608 00:05:47.889 unregister 0x200000400000 10485760 PASSED 00:05:47.889 passed 00:05:47.889 00:05:47.889 Run Summary: Type Total Ran Passed Failed Inactive 00:05:47.889 suites 1 1 n/a 0 0 00:05:47.889 tests 1 1 1 0 0 00:05:47.889 asserts 15 15 15 0 n/a 00:05:47.889 00:05:47.889 Elapsed time = 0.083 seconds 00:05:47.889 00:05:47.889 real 0m0.312s 00:05:47.889 user 0m0.126s 00:05:47.889 sys 0m0.081s 00:05:47.889 14:05:24 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.889 14:05:24 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:47.889 ************************************ 00:05:47.889 END TEST env_mem_callbacks 00:05:47.889 ************************************ 00:05:47.889 00:05:47.889 real 0m9.387s 00:05:47.889 user 0m7.466s 00:05:47.889 sys 0m1.538s 00:05:47.889 14:05:24 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.889 14:05:24 env -- common/autotest_common.sh@10 -- # set +x 00:05:47.889 ************************************ 00:05:47.889 END TEST env 00:05:47.889 ************************************ 00:05:47.889 14:05:25 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:47.889 14:05:25 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:47.889 14:05:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.889 14:05:25 -- common/autotest_common.sh@10 -- # set +x 00:05:47.889 ************************************ 00:05:47.889 START TEST rpc 00:05:47.889 ************************************ 00:05:47.889 14:05:25 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:05:47.889 * Looking for test storage... 00:05:47.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:47.889 14:05:25 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:47.889 14:05:25 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:47.889 14:05:25 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:48.148 14:05:25 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:48.148 14:05:25 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.148 14:05:25 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.148 14:05:25 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.148 14:05:25 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.148 14:05:25 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.148 14:05:25 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.148 14:05:25 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.148 14:05:25 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.148 14:05:25 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.148 14:05:25 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.148 14:05:25 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.148 14:05:25 rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:48.148 14:05:25 rpc -- scripts/common.sh@345 -- # : 1 00:05:48.148 14:05:25 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.148 14:05:25 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.148 14:05:25 rpc -- scripts/common.sh@365 -- # decimal 1 00:05:48.148 14:05:25 rpc -- scripts/common.sh@353 -- # local d=1 00:05:48.148 14:05:25 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.148 14:05:25 rpc -- scripts/common.sh@355 -- # echo 1 00:05:48.148 14:05:25 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.148 14:05:25 rpc -- scripts/common.sh@366 -- # decimal 2 00:05:48.148 14:05:25 rpc -- scripts/common.sh@353 -- # local d=2 00:05:48.148 14:05:25 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.148 14:05:25 rpc -- scripts/common.sh@355 -- # echo 2 00:05:48.148 14:05:25 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.148 14:05:25 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.148 14:05:25 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.148 14:05:25 rpc -- scripts/common.sh@368 -- # return 0 00:05:48.148 14:05:25 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.148 14:05:25 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:48.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.148 --rc genhtml_branch_coverage=1 00:05:48.148 --rc genhtml_function_coverage=1 00:05:48.148 --rc genhtml_legend=1 00:05:48.148 --rc geninfo_all_blocks=1 00:05:48.148 --rc geninfo_unexecuted_blocks=1 00:05:48.148 00:05:48.148 ' 00:05:48.148 14:05:25 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:48.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.148 --rc genhtml_branch_coverage=1 00:05:48.148 --rc genhtml_function_coverage=1 00:05:48.148 --rc genhtml_legend=1 00:05:48.148 --rc geninfo_all_blocks=1 00:05:48.148 --rc geninfo_unexecuted_blocks=1 00:05:48.148 00:05:48.148 ' 00:05:48.148 14:05:25 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:48.148 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:48.148 --rc genhtml_branch_coverage=1 00:05:48.148 --rc genhtml_function_coverage=1 00:05:48.148 --rc genhtml_legend=1 00:05:48.148 --rc geninfo_all_blocks=1 00:05:48.148 --rc geninfo_unexecuted_blocks=1 00:05:48.148 00:05:48.149 ' 00:05:48.149 14:05:25 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:48.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.149 --rc genhtml_branch_coverage=1 00:05:48.149 --rc genhtml_function_coverage=1 00:05:48.149 --rc genhtml_legend=1 00:05:48.149 --rc geninfo_all_blocks=1 00:05:48.149 --rc geninfo_unexecuted_blocks=1 00:05:48.149 00:05:48.149 ' 00:05:48.149 14:05:25 rpc -- rpc/rpc.sh@65 -- # spdk_pid=56718 00:05:48.149 14:05:25 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:48.149 14:05:25 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:05:48.149 14:05:25 rpc -- rpc/rpc.sh@67 -- # waitforlisten 56718 00:05:48.149 14:05:25 rpc -- common/autotest_common.sh@835 -- # '[' -z 56718 ']' 00:05:48.149 14:05:25 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.149 14:05:25 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.149 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.149 14:05:25 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.149 14:05:25 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.149 14:05:25 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.149 [2024-11-27 14:05:25.405101] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:05:48.149 [2024-11-27 14:05:25.405282] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56718 ] 00:05:48.407 [2024-11-27 14:05:25.595088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.666 [2024-11-27 14:05:25.748150] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:48.666 [2024-11-27 14:05:25.748219] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 56718' to capture a snapshot of events at runtime. 00:05:48.666 [2024-11-27 14:05:25.748240] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:48.666 [2024-11-27 14:05:25.748259] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:48.666 [2024-11-27 14:05:25.748275] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid56718 for offline analysis/debug. 
00:05:48.666 [2024-11-27 14:05:25.749846] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.603 14:05:26 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.603 14:05:26 rpc -- common/autotest_common.sh@868 -- # return 0 00:05:49.603 14:05:26 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.603 14:05:26 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:05:49.603 14:05:26 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:49.603 14:05:26 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:49.603 14:05:26 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.603 14:05:26 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.603 14:05:26 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.603 ************************************ 00:05:49.603 START TEST rpc_integrity 00:05:49.603 ************************************ 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:49.603 14:05:26 
rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:49.603 { 00:05:49.603 "name": "Malloc0", 00:05:49.603 "aliases": [ 00:05:49.603 "3d06b878-a971-4500-b062-a021f3b12d0a" 00:05:49.603 ], 00:05:49.603 "product_name": "Malloc disk", 00:05:49.603 "block_size": 512, 00:05:49.603 "num_blocks": 16384, 00:05:49.603 "uuid": "3d06b878-a971-4500-b062-a021f3b12d0a", 00:05:49.603 "assigned_rate_limits": { 00:05:49.603 "rw_ios_per_sec": 0, 00:05:49.603 "rw_mbytes_per_sec": 0, 00:05:49.603 "r_mbytes_per_sec": 0, 00:05:49.603 "w_mbytes_per_sec": 0 00:05:49.603 }, 00:05:49.603 "claimed": false, 00:05:49.603 "zoned": false, 00:05:49.603 "supported_io_types": { 00:05:49.603 "read": true, 00:05:49.603 "write": true, 00:05:49.603 "unmap": true, 00:05:49.603 "flush": true, 00:05:49.603 "reset": true, 00:05:49.603 "nvme_admin": false, 00:05:49.603 "nvme_io": false, 00:05:49.603 "nvme_io_md": false, 00:05:49.603 "write_zeroes": true, 00:05:49.603 "zcopy": true, 00:05:49.603 "get_zone_info": false, 00:05:49.603 "zone_management": false, 00:05:49.603 "zone_append": false, 00:05:49.603 "compare": false, 00:05:49.603 "compare_and_write": false, 00:05:49.603 "abort": true, 00:05:49.603 "seek_hole": false, 
00:05:49.603 "seek_data": false, 00:05:49.603 "copy": true, 00:05:49.603 "nvme_iov_md": false 00:05:49.603 }, 00:05:49.603 "memory_domains": [ 00:05:49.603 { 00:05:49.603 "dma_device_id": "system", 00:05:49.603 "dma_device_type": 1 00:05:49.603 }, 00:05:49.603 { 00:05:49.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.603 "dma_device_type": 2 00:05:49.603 } 00:05:49.603 ], 00:05:49.603 "driver_specific": {} 00:05:49.603 } 00:05:49.603 ]' 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.603 [2024-11-27 14:05:26.809883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:49.603 [2024-11-27 14:05:26.809968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:49.603 [2024-11-27 14:05:26.810002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007880 00:05:49.603 [2024-11-27 14:05:26.810026] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:49.603 [2024-11-27 14:05:26.813187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:49.603 [2024-11-27 14:05:26.813244] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:49.603 Passthru0 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 
00:05:49.603 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.603 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:49.603 { 00:05:49.603 "name": "Malloc0", 00:05:49.603 "aliases": [ 00:05:49.603 "3d06b878-a971-4500-b062-a021f3b12d0a" 00:05:49.603 ], 00:05:49.603 "product_name": "Malloc disk", 00:05:49.603 "block_size": 512, 00:05:49.603 "num_blocks": 16384, 00:05:49.603 "uuid": "3d06b878-a971-4500-b062-a021f3b12d0a", 00:05:49.603 "assigned_rate_limits": { 00:05:49.603 "rw_ios_per_sec": 0, 00:05:49.603 "rw_mbytes_per_sec": 0, 00:05:49.603 "r_mbytes_per_sec": 0, 00:05:49.603 "w_mbytes_per_sec": 0 00:05:49.603 }, 00:05:49.603 "claimed": true, 00:05:49.603 "claim_type": "exclusive_write", 00:05:49.603 "zoned": false, 00:05:49.603 "supported_io_types": { 00:05:49.603 "read": true, 00:05:49.603 "write": true, 00:05:49.603 "unmap": true, 00:05:49.603 "flush": true, 00:05:49.603 "reset": true, 00:05:49.603 "nvme_admin": false, 00:05:49.603 "nvme_io": false, 00:05:49.603 "nvme_io_md": false, 00:05:49.603 "write_zeroes": true, 00:05:49.603 "zcopy": true, 00:05:49.603 "get_zone_info": false, 00:05:49.603 "zone_management": false, 00:05:49.603 "zone_append": false, 00:05:49.603 "compare": false, 00:05:49.603 "compare_and_write": false, 00:05:49.603 "abort": true, 00:05:49.603 "seek_hole": false, 00:05:49.603 "seek_data": false, 00:05:49.603 "copy": true, 00:05:49.603 "nvme_iov_md": false 00:05:49.603 }, 00:05:49.603 "memory_domains": [ 00:05:49.603 { 00:05:49.603 "dma_device_id": "system", 00:05:49.603 "dma_device_type": 1 00:05:49.603 }, 00:05:49.603 { 00:05:49.603 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.603 "dma_device_type": 2 00:05:49.603 } 00:05:49.603 ], 00:05:49.603 "driver_specific": {} 00:05:49.603 }, 00:05:49.603 { 00:05:49.603 "name": "Passthru0", 00:05:49.603 "aliases": [ 00:05:49.603 "1dc25316-10ef-5cfb-88fb-bb28b00cead5" 00:05:49.603 ], 00:05:49.604 "product_name": "passthru", 00:05:49.604 
"block_size": 512, 00:05:49.604 "num_blocks": 16384, 00:05:49.604 "uuid": "1dc25316-10ef-5cfb-88fb-bb28b00cead5", 00:05:49.604 "assigned_rate_limits": { 00:05:49.604 "rw_ios_per_sec": 0, 00:05:49.604 "rw_mbytes_per_sec": 0, 00:05:49.604 "r_mbytes_per_sec": 0, 00:05:49.604 "w_mbytes_per_sec": 0 00:05:49.604 }, 00:05:49.604 "claimed": false, 00:05:49.604 "zoned": false, 00:05:49.604 "supported_io_types": { 00:05:49.604 "read": true, 00:05:49.604 "write": true, 00:05:49.604 "unmap": true, 00:05:49.604 "flush": true, 00:05:49.604 "reset": true, 00:05:49.604 "nvme_admin": false, 00:05:49.604 "nvme_io": false, 00:05:49.604 "nvme_io_md": false, 00:05:49.604 "write_zeroes": true, 00:05:49.604 "zcopy": true, 00:05:49.604 "get_zone_info": false, 00:05:49.604 "zone_management": false, 00:05:49.604 "zone_append": false, 00:05:49.604 "compare": false, 00:05:49.604 "compare_and_write": false, 00:05:49.604 "abort": true, 00:05:49.604 "seek_hole": false, 00:05:49.604 "seek_data": false, 00:05:49.604 "copy": true, 00:05:49.604 "nvme_iov_md": false 00:05:49.604 }, 00:05:49.604 "memory_domains": [ 00:05:49.604 { 00:05:49.604 "dma_device_id": "system", 00:05:49.604 "dma_device_type": 1 00:05:49.604 }, 00:05:49.604 { 00:05:49.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.604 "dma_device_type": 2 00:05:49.604 } 00:05:49.604 ], 00:05:49.604 "driver_specific": { 00:05:49.604 "passthru": { 00:05:49.604 "name": "Passthru0", 00:05:49.604 "base_bdev_name": "Malloc0" 00:05:49.604 } 00:05:49.604 } 00:05:49.604 } 00:05:49.604 ]' 00:05:49.604 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:49.863 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:49.863 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:49.863 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.863 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.863 14:05:26 rpc.rpc_integrity 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.863 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:49.863 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.863 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.863 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.863 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:49.863 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.863 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.863 14:05:26 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.863 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:49.863 14:05:26 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:49.863 14:05:27 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:49.863 00:05:49.863 real 0m0.376s 00:05:49.863 user 0m0.239s 00:05:49.863 sys 0m0.038s 00:05:49.863 14:05:27 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.863 ************************************ 00:05:49.863 END TEST rpc_integrity 00:05:49.863 ************************************ 00:05:49.863 14:05:27 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:49.863 14:05:27 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:49.863 14:05:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.863 14:05:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.863 14:05:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.863 ************************************ 00:05:49.863 START TEST rpc_plugins 00:05:49.863 ************************************ 00:05:49.863 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:49.863 14:05:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # 
rpc_cmd --plugin rpc_plugin create_malloc 00:05:49.863 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.863 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.863 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.863 14:05:27 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:49.863 14:05:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:49.863 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.863 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:49.863 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.863 14:05:27 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:49.863 { 00:05:49.863 "name": "Malloc1", 00:05:49.863 "aliases": [ 00:05:49.863 "557032a7-f0e9-4466-897b-d7e306f11385" 00:05:49.863 ], 00:05:49.863 "product_name": "Malloc disk", 00:05:49.863 "block_size": 4096, 00:05:49.863 "num_blocks": 256, 00:05:49.863 "uuid": "557032a7-f0e9-4466-897b-d7e306f11385", 00:05:49.863 "assigned_rate_limits": { 00:05:49.863 "rw_ios_per_sec": 0, 00:05:49.863 "rw_mbytes_per_sec": 0, 00:05:49.863 "r_mbytes_per_sec": 0, 00:05:49.863 "w_mbytes_per_sec": 0 00:05:49.863 }, 00:05:49.863 "claimed": false, 00:05:49.863 "zoned": false, 00:05:49.863 "supported_io_types": { 00:05:49.863 "read": true, 00:05:49.863 "write": true, 00:05:49.863 "unmap": true, 00:05:49.863 "flush": true, 00:05:49.863 "reset": true, 00:05:49.863 "nvme_admin": false, 00:05:49.863 "nvme_io": false, 00:05:49.863 "nvme_io_md": false, 00:05:49.863 "write_zeroes": true, 00:05:49.863 "zcopy": true, 00:05:49.863 "get_zone_info": false, 00:05:49.863 "zone_management": false, 00:05:49.863 "zone_append": false, 00:05:49.863 "compare": false, 00:05:49.863 "compare_and_write": false, 00:05:49.863 "abort": true, 00:05:49.863 "seek_hole": false, 00:05:49.863 "seek_data": false, 00:05:49.863 "copy": 
true, 00:05:49.863 "nvme_iov_md": false 00:05:49.863 }, 00:05:49.863 "memory_domains": [ 00:05:49.863 { 00:05:49.863 "dma_device_id": "system", 00:05:49.863 "dma_device_type": 1 00:05:49.863 }, 00:05:49.863 { 00:05:49.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:49.863 "dma_device_type": 2 00:05:49.863 } 00:05:49.863 ], 00:05:49.863 "driver_specific": {} 00:05:49.863 } 00:05:49.863 ]' 00:05:49.863 14:05:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:50.123 14:05:27 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:50.123 14:05:27 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:50.123 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.123 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.123 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.123 14:05:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:50.123 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.123 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.123 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.123 14:05:27 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:50.123 14:05:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:50.123 14:05:27 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:50.123 00:05:50.123 real 0m0.180s 00:05:50.123 user 0m0.119s 00:05:50.123 sys 0m0.016s 00:05:50.123 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.123 14:05:27 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:50.123 ************************************ 00:05:50.123 END TEST rpc_plugins 00:05:50.123 ************************************ 00:05:50.123 14:05:27 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:50.123 14:05:27 rpc -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.123 14:05:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.123 14:05:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.123 ************************************ 00:05:50.123 START TEST rpc_trace_cmd_test 00:05:50.123 ************************************ 00:05:50.123 14:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:50.123 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:50.123 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:50.123 14:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.123 14:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.123 14:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.123 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:50.123 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid56718", 00:05:50.123 "tpoint_group_mask": "0x8", 00:05:50.123 "iscsi_conn": { 00:05:50.123 "mask": "0x2", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "scsi": { 00:05:50.123 "mask": "0x4", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "bdev": { 00:05:50.123 "mask": "0x8", 00:05:50.123 "tpoint_mask": "0xffffffffffffffff" 00:05:50.123 }, 00:05:50.123 "nvmf_rdma": { 00:05:50.123 "mask": "0x10", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "nvmf_tcp": { 00:05:50.123 "mask": "0x20", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "ftl": { 00:05:50.123 "mask": "0x40", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "blobfs": { 00:05:50.123 "mask": "0x80", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "dsa": { 00:05:50.123 "mask": "0x200", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "thread": { 00:05:50.123 "mask": "0x400", 00:05:50.123 
"tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "nvme_pcie": { 00:05:50.123 "mask": "0x800", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "iaa": { 00:05:50.123 "mask": "0x1000", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "nvme_tcp": { 00:05:50.123 "mask": "0x2000", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "bdev_nvme": { 00:05:50.123 "mask": "0x4000", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "sock": { 00:05:50.123 "mask": "0x8000", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "blob": { 00:05:50.123 "mask": "0x10000", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "bdev_raid": { 00:05:50.123 "mask": "0x20000", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 }, 00:05:50.123 "scheduler": { 00:05:50.123 "mask": "0x40000", 00:05:50.123 "tpoint_mask": "0x0" 00:05:50.123 } 00:05:50.123 }' 00:05:50.123 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:50.123 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:50.123 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:50.382 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:50.382 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:50.382 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:50.383 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:50.383 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:50.383 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:50.383 14:05:27 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:05:50.383 00:05:50.383 real 0m0.304s 00:05:50.383 user 0m0.255s 00:05:50.383 sys 0m0.040s 00:05:50.383 14:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:05:50.383 14:05:27 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:50.383 ************************************ 00:05:50.383 END TEST rpc_trace_cmd_test 00:05:50.383 ************************************ 00:05:50.383 14:05:27 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:50.383 14:05:27 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:50.383 14:05:27 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:50.383 14:05:27 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:50.383 14:05:27 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:50.383 14:05:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.383 ************************************ 00:05:50.383 START TEST rpc_daemon_integrity 00:05:50.383 ************************************ 00:05:50.383 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:50.383 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:50.383 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.383 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.383 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.383 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:50.383 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 
-- # malloc=Malloc2 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:50.641 { 00:05:50.641 "name": "Malloc2", 00:05:50.641 "aliases": [ 00:05:50.641 "610413b1-25f4-456e-b9a3-d5419dc754b0" 00:05:50.641 ], 00:05:50.641 "product_name": "Malloc disk", 00:05:50.641 "block_size": 512, 00:05:50.641 "num_blocks": 16384, 00:05:50.641 "uuid": "610413b1-25f4-456e-b9a3-d5419dc754b0", 00:05:50.641 "assigned_rate_limits": { 00:05:50.641 "rw_ios_per_sec": 0, 00:05:50.641 "rw_mbytes_per_sec": 0, 00:05:50.641 "r_mbytes_per_sec": 0, 00:05:50.641 "w_mbytes_per_sec": 0 00:05:50.641 }, 00:05:50.641 "claimed": false, 00:05:50.641 "zoned": false, 00:05:50.641 "supported_io_types": { 00:05:50.641 "read": true, 00:05:50.641 "write": true, 00:05:50.641 "unmap": true, 00:05:50.641 "flush": true, 00:05:50.641 "reset": true, 00:05:50.641 "nvme_admin": false, 00:05:50.641 "nvme_io": false, 00:05:50.641 "nvme_io_md": false, 00:05:50.641 "write_zeroes": true, 00:05:50.641 "zcopy": true, 00:05:50.641 "get_zone_info": false, 00:05:50.641 "zone_management": false, 00:05:50.641 "zone_append": false, 00:05:50.641 "compare": false, 00:05:50.641 "compare_and_write": false, 00:05:50.641 "abort": true, 00:05:50.641 "seek_hole": false, 00:05:50.641 "seek_data": false, 00:05:50.641 "copy": true, 00:05:50.641 "nvme_iov_md": false 00:05:50.641 }, 00:05:50.641 "memory_domains": [ 00:05:50.641 { 00:05:50.641 "dma_device_id": "system", 00:05:50.641 "dma_device_type": 1 00:05:50.641 }, 00:05:50.641 { 00:05:50.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.641 "dma_device_type": 2 00:05:50.641 } 
00:05:50.641 ], 00:05:50.641 "driver_specific": {} 00:05:50.641 } 00:05:50.641 ]' 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.641 [2024-11-27 14:05:27.785266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:50.641 [2024-11-27 14:05:27.785346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:50.641 [2024-11-27 14:05:27.785380] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:05:50.641 [2024-11-27 14:05:27.785398] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:50.641 Passthru0 00:05:50.641 [2024-11-27 14:05:27.788435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:50.641 [2024-11-27 14:05:27.788484] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.641 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:50.641 { 00:05:50.641 "name": "Malloc2", 00:05:50.641 "aliases": [ 00:05:50.641 "610413b1-25f4-456e-b9a3-d5419dc754b0" 
00:05:50.641 ], 00:05:50.641 "product_name": "Malloc disk", 00:05:50.641 "block_size": 512, 00:05:50.641 "num_blocks": 16384, 00:05:50.641 "uuid": "610413b1-25f4-456e-b9a3-d5419dc754b0", 00:05:50.641 "assigned_rate_limits": { 00:05:50.641 "rw_ios_per_sec": 0, 00:05:50.641 "rw_mbytes_per_sec": 0, 00:05:50.641 "r_mbytes_per_sec": 0, 00:05:50.641 "w_mbytes_per_sec": 0 00:05:50.641 }, 00:05:50.641 "claimed": true, 00:05:50.641 "claim_type": "exclusive_write", 00:05:50.641 "zoned": false, 00:05:50.641 "supported_io_types": { 00:05:50.641 "read": true, 00:05:50.641 "write": true, 00:05:50.641 "unmap": true, 00:05:50.641 "flush": true, 00:05:50.641 "reset": true, 00:05:50.641 "nvme_admin": false, 00:05:50.641 "nvme_io": false, 00:05:50.641 "nvme_io_md": false, 00:05:50.641 "write_zeroes": true, 00:05:50.641 "zcopy": true, 00:05:50.641 "get_zone_info": false, 00:05:50.641 "zone_management": false, 00:05:50.641 "zone_append": false, 00:05:50.641 "compare": false, 00:05:50.641 "compare_and_write": false, 00:05:50.641 "abort": true, 00:05:50.641 "seek_hole": false, 00:05:50.641 "seek_data": false, 00:05:50.641 "copy": true, 00:05:50.641 "nvme_iov_md": false 00:05:50.641 }, 00:05:50.641 "memory_domains": [ 00:05:50.641 { 00:05:50.641 "dma_device_id": "system", 00:05:50.641 "dma_device_type": 1 00:05:50.641 }, 00:05:50.641 { 00:05:50.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.641 "dma_device_type": 2 00:05:50.642 } 00:05:50.642 ], 00:05:50.642 "driver_specific": {} 00:05:50.642 }, 00:05:50.642 { 00:05:50.642 "name": "Passthru0", 00:05:50.642 "aliases": [ 00:05:50.642 "996ea047-82e7-5f34-9ad9-f95c342e46a7" 00:05:50.642 ], 00:05:50.642 "product_name": "passthru", 00:05:50.642 "block_size": 512, 00:05:50.642 "num_blocks": 16384, 00:05:50.642 "uuid": "996ea047-82e7-5f34-9ad9-f95c342e46a7", 00:05:50.642 "assigned_rate_limits": { 00:05:50.642 "rw_ios_per_sec": 0, 00:05:50.642 "rw_mbytes_per_sec": 0, 00:05:50.642 "r_mbytes_per_sec": 0, 00:05:50.642 "w_mbytes_per_sec": 0 
00:05:50.642 }, 00:05:50.642 "claimed": false, 00:05:50.642 "zoned": false, 00:05:50.642 "supported_io_types": { 00:05:50.642 "read": true, 00:05:50.642 "write": true, 00:05:50.642 "unmap": true, 00:05:50.642 "flush": true, 00:05:50.642 "reset": true, 00:05:50.642 "nvme_admin": false, 00:05:50.642 "nvme_io": false, 00:05:50.642 "nvme_io_md": false, 00:05:50.642 "write_zeroes": true, 00:05:50.642 "zcopy": true, 00:05:50.642 "get_zone_info": false, 00:05:50.642 "zone_management": false, 00:05:50.642 "zone_append": false, 00:05:50.642 "compare": false, 00:05:50.642 "compare_and_write": false, 00:05:50.642 "abort": true, 00:05:50.642 "seek_hole": false, 00:05:50.642 "seek_data": false, 00:05:50.642 "copy": true, 00:05:50.642 "nvme_iov_md": false 00:05:50.642 }, 00:05:50.642 "memory_domains": [ 00:05:50.642 { 00:05:50.642 "dma_device_id": "system", 00:05:50.642 "dma_device_type": 1 00:05:50.642 }, 00:05:50.642 { 00:05:50.642 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:50.642 "dma_device_type": 2 00:05:50.642 } 00:05:50.642 ], 00:05:50.642 "driver_specific": { 00:05:50.642 "passthru": { 00:05:50.642 "name": "Passthru0", 00:05:50.642 "base_bdev_name": "Malloc2" 00:05:50.642 } 00:05:50.642 } 00:05:50.642 } 00:05:50.642 ]' 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:50.642 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:50.900 14:05:27 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:50.900 00:05:50.900 real 0m0.337s 00:05:50.900 user 0m0.197s 00:05:50.900 sys 0m0.042s 00:05:50.900 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:50.900 14:05:27 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:50.900 ************************************ 00:05:50.900 END TEST rpc_daemon_integrity 00:05:50.900 ************************************ 00:05:50.900 14:05:27 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:50.900 14:05:27 rpc -- rpc/rpc.sh@84 -- # killprocess 56718 00:05:50.900 14:05:27 rpc -- common/autotest_common.sh@954 -- # '[' -z 56718 ']' 00:05:50.900 14:05:27 rpc -- common/autotest_common.sh@958 -- # kill -0 56718 00:05:50.900 14:05:27 rpc -- common/autotest_common.sh@959 -- # uname 00:05:50.900 14:05:28 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.900 14:05:28 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56718 00:05:50.900 14:05:28 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.900 14:05:28 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.900 
killing process with pid 56718 00:05:50.900 14:05:28 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56718' 00:05:50.900 14:05:28 rpc -- common/autotest_common.sh@973 -- # kill 56718 00:05:50.900 14:05:28 rpc -- common/autotest_common.sh@978 -- # wait 56718 00:05:53.429 00:05:53.429 real 0m5.222s 00:05:53.429 user 0m6.003s 00:05:53.429 sys 0m0.912s 00:05:53.429 14:05:30 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.429 14:05:30 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.429 ************************************ 00:05:53.429 END TEST rpc 00:05:53.429 ************************************ 00:05:53.429 14:05:30 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:53.429 14:05:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.429 14:05:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.429 14:05:30 -- common/autotest_common.sh@10 -- # set +x 00:05:53.429 ************************************ 00:05:53.429 START TEST skip_rpc 00:05:53.429 ************************************ 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:05:53.429 * Looking for test storage... 
00:05:53.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:53.429 14:05:30 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:53.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.429 --rc genhtml_branch_coverage=1 00:05:53.429 --rc genhtml_function_coverage=1 00:05:53.429 --rc genhtml_legend=1 00:05:53.429 --rc geninfo_all_blocks=1 00:05:53.429 --rc geninfo_unexecuted_blocks=1 00:05:53.429 00:05:53.429 ' 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:53.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.429 --rc genhtml_branch_coverage=1 00:05:53.429 --rc genhtml_function_coverage=1 00:05:53.429 --rc genhtml_legend=1 00:05:53.429 --rc geninfo_all_blocks=1 00:05:53.429 --rc geninfo_unexecuted_blocks=1 00:05:53.429 00:05:53.429 ' 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:53.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.429 --rc genhtml_branch_coverage=1 00:05:53.429 --rc genhtml_function_coverage=1 00:05:53.429 --rc genhtml_legend=1 00:05:53.429 --rc geninfo_all_blocks=1 00:05:53.429 --rc geninfo_unexecuted_blocks=1 00:05:53.429 00:05:53.429 ' 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:53.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:53.429 --rc genhtml_branch_coverage=1 00:05:53.429 --rc genhtml_function_coverage=1 00:05:53.429 --rc genhtml_legend=1 00:05:53.429 --rc geninfo_all_blocks=1 00:05:53.429 --rc geninfo_unexecuted_blocks=1 00:05:53.429 00:05:53.429 ' 00:05:53.429 14:05:30 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:05:53.429 14:05:30 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:05:53.429 14:05:30 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.429 14:05:30 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:53.429 ************************************ 00:05:53.429 START TEST skip_rpc 00:05:53.429 ************************************ 00:05:53.429 14:05:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:53.429 14:05:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=56948 00:05:53.429 14:05:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:53.429 14:05:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.429 14:05:30 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:53.429 [2024-11-27 14:05:30.659129] Starting SPDK v25.01-pre 
git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:05:53.429 [2024-11-27 14:05:30.659287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid56948 ] 00:05:53.688 [2024-11-27 14:05:30.834852] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:53.945 [2024-11-27 14:05:30.981508] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:59.205 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( 
!es == 0 )) 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 56948 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 56948 ']' 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 56948 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 56948 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:59.206 killing process with pid 56948 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 56948' 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 56948 00:05:59.206 14:05:35 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 56948 00:06:00.581 00:06:00.581 real 0m7.251s 00:06:00.581 user 0m6.665s 00:06:00.581 sys 0m0.456s 00:06:00.581 14:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:00.581 14:05:37 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.581 ************************************ 00:06:00.581 END TEST skip_rpc 00:06:00.581 ************************************ 00:06:00.581 14:05:37 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:00.581 14:05:37 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:00.581 14:05:37 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:00.581 14:05:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:00.581 
************************************ 00:06:00.581 START TEST skip_rpc_with_json 00:06:00.581 ************************************ 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=57052 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 57052 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 57052 ']' 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:00.581 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:00.581 14:05:37 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:00.839 [2024-11-27 14:05:37.925671] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:06:00.839 [2024-11-27 14:05:37.925845] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57052 ] 00:06:00.839 [2024-11-27 14:05:38.099756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:01.097 [2024-11-27 14:05:38.231427] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.031 [2024-11-27 14:05:39.103870] nvmf_rpc.c:2706:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:02.031 request: 00:06:02.031 { 00:06:02.031 "trtype": "tcp", 00:06:02.031 "method": "nvmf_get_transports", 00:06:02.031 "req_id": 1 00:06:02.031 } 00:06:02.031 Got JSON-RPC error response 00:06:02.031 response: 00:06:02.031 { 00:06:02.031 "code": -19, 00:06:02.031 "message": "No such device" 00:06:02.031 } 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.031 [2024-11-27 14:05:39.116076] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:02.031 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:02.031 { 00:06:02.031 "subsystems": [ 00:06:02.031 { 00:06:02.031 "subsystem": "fsdev", 00:06:02.031 "config": [ 00:06:02.031 { 00:06:02.031 "method": "fsdev_set_opts", 00:06:02.031 "params": { 00:06:02.032 "fsdev_io_pool_size": 65535, 00:06:02.032 "fsdev_io_cache_size": 256 00:06:02.032 } 00:06:02.032 } 00:06:02.032 ] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "keyring", 00:06:02.032 "config": [] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "iobuf", 00:06:02.032 "config": [ 00:06:02.032 { 00:06:02.032 "method": "iobuf_set_options", 00:06:02.032 "params": { 00:06:02.032 "small_pool_count": 8192, 00:06:02.032 "large_pool_count": 1024, 00:06:02.032 "small_bufsize": 8192, 00:06:02.032 "large_bufsize": 135168, 00:06:02.032 "enable_numa": false 00:06:02.032 } 00:06:02.032 } 00:06:02.032 ] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "sock", 00:06:02.032 "config": [ 00:06:02.032 { 00:06:02.032 "method": "sock_set_default_impl", 00:06:02.032 "params": { 00:06:02.032 "impl_name": "posix" 00:06:02.032 } 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "method": "sock_impl_set_options", 00:06:02.032 "params": { 00:06:02.032 "impl_name": "ssl", 00:06:02.032 "recv_buf_size": 4096, 00:06:02.032 "send_buf_size": 4096, 00:06:02.032 "enable_recv_pipe": true, 00:06:02.032 "enable_quickack": false, 00:06:02.032 
"enable_placement_id": 0, 00:06:02.032 "enable_zerocopy_send_server": true, 00:06:02.032 "enable_zerocopy_send_client": false, 00:06:02.032 "zerocopy_threshold": 0, 00:06:02.032 "tls_version": 0, 00:06:02.032 "enable_ktls": false 00:06:02.032 } 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "method": "sock_impl_set_options", 00:06:02.032 "params": { 00:06:02.032 "impl_name": "posix", 00:06:02.032 "recv_buf_size": 2097152, 00:06:02.032 "send_buf_size": 2097152, 00:06:02.032 "enable_recv_pipe": true, 00:06:02.032 "enable_quickack": false, 00:06:02.032 "enable_placement_id": 0, 00:06:02.032 "enable_zerocopy_send_server": true, 00:06:02.032 "enable_zerocopy_send_client": false, 00:06:02.032 "zerocopy_threshold": 0, 00:06:02.032 "tls_version": 0, 00:06:02.032 "enable_ktls": false 00:06:02.032 } 00:06:02.032 } 00:06:02.032 ] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "vmd", 00:06:02.032 "config": [] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "accel", 00:06:02.032 "config": [ 00:06:02.032 { 00:06:02.032 "method": "accel_set_options", 00:06:02.032 "params": { 00:06:02.032 "small_cache_size": 128, 00:06:02.032 "large_cache_size": 16, 00:06:02.032 "task_count": 2048, 00:06:02.032 "sequence_count": 2048, 00:06:02.032 "buf_count": 2048 00:06:02.032 } 00:06:02.032 } 00:06:02.032 ] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "bdev", 00:06:02.032 "config": [ 00:06:02.032 { 00:06:02.032 "method": "bdev_set_options", 00:06:02.032 "params": { 00:06:02.032 "bdev_io_pool_size": 65535, 00:06:02.032 "bdev_io_cache_size": 256, 00:06:02.032 "bdev_auto_examine": true, 00:06:02.032 "iobuf_small_cache_size": 128, 00:06:02.032 "iobuf_large_cache_size": 16 00:06:02.032 } 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "method": "bdev_raid_set_options", 00:06:02.032 "params": { 00:06:02.032 "process_window_size_kb": 1024, 00:06:02.032 "process_max_bandwidth_mb_sec": 0 00:06:02.032 } 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "method": "bdev_iscsi_set_options", 
00:06:02.032 "params": { 00:06:02.032 "timeout_sec": 30 00:06:02.032 } 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "method": "bdev_nvme_set_options", 00:06:02.032 "params": { 00:06:02.032 "action_on_timeout": "none", 00:06:02.032 "timeout_us": 0, 00:06:02.032 "timeout_admin_us": 0, 00:06:02.032 "keep_alive_timeout_ms": 10000, 00:06:02.032 "arbitration_burst": 0, 00:06:02.032 "low_priority_weight": 0, 00:06:02.032 "medium_priority_weight": 0, 00:06:02.032 "high_priority_weight": 0, 00:06:02.032 "nvme_adminq_poll_period_us": 10000, 00:06:02.032 "nvme_ioq_poll_period_us": 0, 00:06:02.032 "io_queue_requests": 0, 00:06:02.032 "delay_cmd_submit": true, 00:06:02.032 "transport_retry_count": 4, 00:06:02.032 "bdev_retry_count": 3, 00:06:02.032 "transport_ack_timeout": 0, 00:06:02.032 "ctrlr_loss_timeout_sec": 0, 00:06:02.032 "reconnect_delay_sec": 0, 00:06:02.032 "fast_io_fail_timeout_sec": 0, 00:06:02.032 "disable_auto_failback": false, 00:06:02.032 "generate_uuids": false, 00:06:02.032 "transport_tos": 0, 00:06:02.032 "nvme_error_stat": false, 00:06:02.032 "rdma_srq_size": 0, 00:06:02.032 "io_path_stat": false, 00:06:02.032 "allow_accel_sequence": false, 00:06:02.032 "rdma_max_cq_size": 0, 00:06:02.032 "rdma_cm_event_timeout_ms": 0, 00:06:02.032 "dhchap_digests": [ 00:06:02.032 "sha256", 00:06:02.032 "sha384", 00:06:02.032 "sha512" 00:06:02.032 ], 00:06:02.032 "dhchap_dhgroups": [ 00:06:02.032 "null", 00:06:02.032 "ffdhe2048", 00:06:02.032 "ffdhe3072", 00:06:02.032 "ffdhe4096", 00:06:02.032 "ffdhe6144", 00:06:02.032 "ffdhe8192" 00:06:02.032 ] 00:06:02.032 } 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "method": "bdev_nvme_set_hotplug", 00:06:02.032 "params": { 00:06:02.032 "period_us": 100000, 00:06:02.032 "enable": false 00:06:02.032 } 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "method": "bdev_wait_for_examine" 00:06:02.032 } 00:06:02.032 ] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "scsi", 00:06:02.032 "config": null 00:06:02.032 }, 00:06:02.032 { 
00:06:02.032 "subsystem": "scheduler", 00:06:02.032 "config": [ 00:06:02.032 { 00:06:02.032 "method": "framework_set_scheduler", 00:06:02.032 "params": { 00:06:02.032 "name": "static" 00:06:02.032 } 00:06:02.032 } 00:06:02.032 ] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "vhost_scsi", 00:06:02.032 "config": [] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "vhost_blk", 00:06:02.032 "config": [] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "ublk", 00:06:02.032 "config": [] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "nbd", 00:06:02.032 "config": [] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "nvmf", 00:06:02.032 "config": [ 00:06:02.032 { 00:06:02.032 "method": "nvmf_set_config", 00:06:02.032 "params": { 00:06:02.032 "discovery_filter": "match_any", 00:06:02.032 "admin_cmd_passthru": { 00:06:02.032 "identify_ctrlr": false 00:06:02.032 }, 00:06:02.032 "dhchap_digests": [ 00:06:02.032 "sha256", 00:06:02.032 "sha384", 00:06:02.032 "sha512" 00:06:02.032 ], 00:06:02.032 "dhchap_dhgroups": [ 00:06:02.032 "null", 00:06:02.032 "ffdhe2048", 00:06:02.032 "ffdhe3072", 00:06:02.032 "ffdhe4096", 00:06:02.032 "ffdhe6144", 00:06:02.032 "ffdhe8192" 00:06:02.032 ] 00:06:02.032 } 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "method": "nvmf_set_max_subsystems", 00:06:02.032 "params": { 00:06:02.032 "max_subsystems": 1024 00:06:02.032 } 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "method": "nvmf_set_crdt", 00:06:02.032 "params": { 00:06:02.032 "crdt1": 0, 00:06:02.032 "crdt2": 0, 00:06:02.032 "crdt3": 0 00:06:02.032 } 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "method": "nvmf_create_transport", 00:06:02.032 "params": { 00:06:02.032 "trtype": "TCP", 00:06:02.032 "max_queue_depth": 128, 00:06:02.032 "max_io_qpairs_per_ctrlr": 127, 00:06:02.032 "in_capsule_data_size": 4096, 00:06:02.032 "max_io_size": 131072, 00:06:02.032 "io_unit_size": 131072, 00:06:02.032 "max_aq_depth": 128, 00:06:02.032 "num_shared_buffers": 511, 
00:06:02.032 "buf_cache_size": 4294967295, 00:06:02.032 "dif_insert_or_strip": false, 00:06:02.032 "zcopy": false, 00:06:02.032 "c2h_success": true, 00:06:02.032 "sock_priority": 0, 00:06:02.032 "abort_timeout_sec": 1, 00:06:02.032 "ack_timeout": 0, 00:06:02.032 "data_wr_pool_size": 0 00:06:02.032 } 00:06:02.032 } 00:06:02.032 ] 00:06:02.032 }, 00:06:02.032 { 00:06:02.032 "subsystem": "iscsi", 00:06:02.032 "config": [ 00:06:02.032 { 00:06:02.032 "method": "iscsi_set_options", 00:06:02.032 "params": { 00:06:02.032 "node_base": "iqn.2016-06.io.spdk", 00:06:02.032 "max_sessions": 128, 00:06:02.032 "max_connections_per_session": 2, 00:06:02.032 "max_queue_depth": 64, 00:06:02.032 "default_time2wait": 2, 00:06:02.032 "default_time2retain": 20, 00:06:02.032 "first_burst_length": 8192, 00:06:02.032 "immediate_data": true, 00:06:02.032 "allow_duplicated_isid": false, 00:06:02.032 "error_recovery_level": 0, 00:06:02.032 "nop_timeout": 60, 00:06:02.032 "nop_in_interval": 30, 00:06:02.032 "disable_chap": false, 00:06:02.032 "require_chap": false, 00:06:02.032 "mutual_chap": false, 00:06:02.032 "chap_group": 0, 00:06:02.032 "max_large_datain_per_connection": 64, 00:06:02.032 "max_r2t_per_connection": 4, 00:06:02.032 "pdu_pool_size": 36864, 00:06:02.032 "immediate_data_pool_size": 16384, 00:06:02.032 "data_out_pool_size": 2048 00:06:02.032 } 00:06:02.032 } 00:06:02.032 ] 00:06:02.032 } 00:06:02.032 ] 00:06:02.032 } 00:06:02.032 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:02.033 14:05:39 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 57052 00:06:02.033 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57052 ']' 00:06:02.033 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57052 00:06:02.033 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:02.033 14:05:39 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:02.033 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57052 00:06:02.291 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:02.291 killing process with pid 57052 00:06:02.291 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:02.291 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57052' 00:06:02.291 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57052 00:06:02.291 14:05:39 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57052 00:06:04.824 14:05:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=57103 00:06:04.824 14:05:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:04.824 14:05:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:10.142 14:05:46 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 57103 00:06:10.142 14:05:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 57103 ']' 00:06:10.142 14:05:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 57103 00:06:10.142 14:05:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:10.142 14:05:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.142 14:05:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57103 00:06:10.142 14:05:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.142 killing process with pid 57103 00:06:10.142 14:05:46 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.142 14:05:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57103' 00:06:10.142 14:05:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 57103 00:06:10.142 14:05:46 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 57103 00:06:12.044 14:05:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:12.044 14:05:48 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:06:12.044 00:06:12.044 real 0m11.045s 00:06:12.044 user 0m10.385s 00:06:12.044 sys 0m1.028s 00:06:12.044 14:05:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.044 14:05:48 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:12.045 ************************************ 00:06:12.045 END TEST skip_rpc_with_json 00:06:12.045 ************************************ 00:06:12.045 14:05:48 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:12.045 14:05:48 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.045 14:05:48 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.045 14:05:48 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.045 ************************************ 00:06:12.045 START TEST skip_rpc_with_delay 00:06:12.045 ************************************ 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:12.045 
14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:12.045 14:05:48 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:12.045 [2024-11-27 14:05:48.985330] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:12.045 14:05:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:12.045 14:05:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:12.045 14:05:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:12.045 14:05:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:12.045 00:06:12.045 real 0m0.168s 00:06:12.045 user 0m0.098s 00:06:12.045 sys 0m0.068s 00:06:12.045 14:05:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.045 14:05:49 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:12.045 ************************************ 00:06:12.045 END TEST skip_rpc_with_delay 00:06:12.045 ************************************ 00:06:12.045 14:05:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:12.045 14:05:49 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:12.045 14:05:49 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:12.045 14:05:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.045 14:05:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.045 14:05:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.045 ************************************ 00:06:12.045 START TEST exit_on_failed_rpc_init 00:06:12.045 ************************************ 00:06:12.045 14:05:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:12.045 14:05:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=57236 00:06:12.045 14:05:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.045 14:05:49 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 57236 00:06:12.045 14:05:49 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 57236 ']' 00:06:12.045 14:05:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.045 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.045 14:05:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.045 14:05:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:12.045 14:05:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.045 14:05:49 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:12.045 [2024-11-27 14:05:49.250498] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:06:12.045 [2024-11-27 14:05:49.250721] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57236 ] 00:06:12.305 [2024-11-27 14:05:49.446199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.563 [2024-11-27 14:05:49.598309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.498 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.498 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.499 14:05:50 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:06:13.499 14:05:50 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:06:13.499 [2024-11-27 14:05:50.737224] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:06:13.499 [2024-11-27 14:05:50.737462] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57261 ] 00:06:13.757 [2024-11-27 14:05:50.931705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.015 [2024-11-27 14:05:51.077070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:14.015 [2024-11-27 14:05:51.077189] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:14.015 [2024-11-27 14:05:51.077213] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:14.015 [2024-11-27 14:05:51.077236] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 57236 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 57236 ']' 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 57236 00:06:14.274 14:05:51 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57236 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:14.274 killing process with pid 57236 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57236' 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 57236 00:06:14.274 14:05:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 57236 00:06:16.814 00:06:16.814 real 0m4.596s 00:06:16.814 user 0m5.014s 00:06:16.814 sys 0m0.765s 00:06:16.814 14:05:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.814 14:05:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:16.814 ************************************ 00:06:16.814 END TEST exit_on_failed_rpc_init 00:06:16.814 ************************************ 00:06:16.814 14:05:53 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:06:16.814 00:06:16.814 real 0m23.431s 00:06:16.814 user 0m22.353s 00:06:16.814 sys 0m2.492s 00:06:16.814 14:05:53 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.814 ************************************ 00:06:16.814 END TEST skip_rpc 00:06:16.814 ************************************ 00:06:16.814 14:05:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:16.814 14:05:53 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:16.814 14:05:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.814 14:05:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.814 14:05:53 -- common/autotest_common.sh@10 -- # set +x 00:06:16.814 ************************************ 00:06:16.814 START TEST rpc_client 00:06:16.814 ************************************ 00:06:16.814 14:05:53 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:06:16.814 * Looking for test storage... 00:06:16.814 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:06:16.814 14:05:53 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:16.814 14:05:53 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:16.814 14:05:53 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:16.814 14:05:53 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@345 
-- # : 1 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:16.814 14:05:53 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:16.814 14:05:53 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:16.814 14:05:53 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:16.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.814 --rc genhtml_branch_coverage=1 00:06:16.814 --rc genhtml_function_coverage=1 00:06:16.814 --rc genhtml_legend=1 00:06:16.814 --rc geninfo_all_blocks=1 00:06:16.814 --rc geninfo_unexecuted_blocks=1 00:06:16.814 00:06:16.814 ' 00:06:16.814 14:05:53 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:16.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.814 --rc genhtml_branch_coverage=1 00:06:16.814 --rc genhtml_function_coverage=1 00:06:16.814 --rc 
genhtml_legend=1 00:06:16.814 --rc geninfo_all_blocks=1 00:06:16.814 --rc geninfo_unexecuted_blocks=1 00:06:16.814 00:06:16.814 ' 00:06:16.814 14:05:53 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:16.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.814 --rc genhtml_branch_coverage=1 00:06:16.814 --rc genhtml_function_coverage=1 00:06:16.815 --rc genhtml_legend=1 00:06:16.815 --rc geninfo_all_blocks=1 00:06:16.815 --rc geninfo_unexecuted_blocks=1 00:06:16.815 00:06:16.815 ' 00:06:16.815 14:05:53 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:16.815 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:16.815 --rc genhtml_branch_coverage=1 00:06:16.815 --rc genhtml_function_coverage=1 00:06:16.815 --rc genhtml_legend=1 00:06:16.815 --rc geninfo_all_blocks=1 00:06:16.815 --rc geninfo_unexecuted_blocks=1 00:06:16.815 00:06:16.815 ' 00:06:16.815 14:05:53 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:06:16.815 OK 00:06:16.815 14:05:53 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:16.815 00:06:16.815 real 0m0.220s 00:06:16.815 user 0m0.131s 00:06:16.815 sys 0m0.099s 00:06:16.815 14:05:53 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:16.815 14:05:53 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:16.815 ************************************ 00:06:16.815 END TEST rpc_client 00:06:16.815 ************************************ 00:06:16.815 14:05:54 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:16.815 14:05:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:16.815 14:05:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:16.815 14:05:54 -- common/autotest_common.sh@10 -- # set +x 00:06:16.815 ************************************ 00:06:16.815 START TEST json_config 
00:06:16.815 ************************************ 00:06:16.815 14:05:54 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:06:17.074 14:05:54 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.074 14:05:54 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:17.074 14:05:54 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.074 14:05:54 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.074 14:05:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.074 14:05:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.074 14:05:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.074 14:05:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.074 14:05:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.074 14:05:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.074 14:05:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.074 14:05:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.074 14:05:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.074 14:05:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.074 14:05:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.074 14:05:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:17.074 14:05:54 json_config -- scripts/common.sh@345 -- # : 1 00:06:17.074 14:05:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.074 14:05:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.074 14:05:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:17.074 14:05:54 json_config -- scripts/common.sh@353 -- # local d=1 00:06:17.074 14:05:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.074 14:05:54 json_config -- scripts/common.sh@355 -- # echo 1 00:06:17.074 14:05:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.074 14:05:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:17.074 14:05:54 json_config -- scripts/common.sh@353 -- # local d=2 00:06:17.074 14:05:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.074 14:05:54 json_config -- scripts/common.sh@355 -- # echo 2 00:06:17.074 14:05:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.074 14:05:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.074 14:05:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.074 14:05:54 json_config -- scripts/common.sh@368 -- # return 0 00:06:17.074 14:05:54 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.074 14:05:54 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.074 --rc genhtml_branch_coverage=1 00:06:17.074 --rc genhtml_function_coverage=1 00:06:17.074 --rc genhtml_legend=1 00:06:17.074 --rc geninfo_all_blocks=1 00:06:17.074 --rc geninfo_unexecuted_blocks=1 00:06:17.074 00:06:17.074 ' 00:06:17.074 14:05:54 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.074 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.074 --rc genhtml_branch_coverage=1 00:06:17.074 --rc genhtml_function_coverage=1 00:06:17.074 --rc genhtml_legend=1 00:06:17.075 --rc geninfo_all_blocks=1 00:06:17.075 --rc geninfo_unexecuted_blocks=1 00:06:17.075 00:06:17.075 ' 00:06:17.075 14:05:54 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.075 --rc genhtml_branch_coverage=1 00:06:17.075 --rc genhtml_function_coverage=1 00:06:17.075 --rc genhtml_legend=1 00:06:17.075 --rc geninfo_all_blocks=1 00:06:17.075 --rc geninfo_unexecuted_blocks=1 00:06:17.075 00:06:17.075 ' 00:06:17.075 14:05:54 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.075 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.075 --rc genhtml_branch_coverage=1 00:06:17.075 --rc genhtml_function_coverage=1 00:06:17.075 --rc genhtml_legend=1 00:06:17.075 --rc geninfo_all_blocks=1 00:06:17.075 --rc geninfo_unexecuted_blocks=1 00:06:17.075 00:06:17.075 ' 00:06:17.075 14:05:54 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5c5f7f81-f6ef-45c0-af5d-fb790bbde370 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@18 -- # 
NVME_HOSTID=5c5f7f81-f6ef-45c0-af5d-fb790bbde370 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:17.075 14:05:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:17.075 14:05:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.075 14:05:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.075 14:05:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.075 14:05:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.075 14:05:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.075 14:05:54 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.075 14:05:54 json_config -- paths/export.sh@5 -- # export PATH 00:06:17.075 14:05:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@51 -- # : 0 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:17.075 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:17.075 14:05:54 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:17.075 14:05:54 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 
00:06:17.075 14:05:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:17.075 14:05:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:17.075 WARNING: No tests are enabled so not running JSON configuration tests 00:06:17.075 14:05:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:17.075 14:05:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:17.075 14:05:54 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:06:17.075 14:05:54 json_config -- json_config/json_config.sh@28 -- # exit 0 00:06:17.075 ************************************ 00:06:17.075 END TEST json_config 00:06:17.075 ************************************ 00:06:17.075 00:06:17.075 real 0m0.179s 00:06:17.075 user 0m0.107s 00:06:17.075 sys 0m0.071s 00:06:17.075 14:05:54 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:17.075 14:05:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:17.075 14:05:54 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:17.075 14:05:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:17.075 14:05:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:17.075 14:05:54 -- common/autotest_common.sh@10 -- # set +x 00:06:17.075 ************************************ 00:06:17.075 START TEST json_config_extra_key 00:06:17.075 ************************************ 00:06:17.075 14:05:54 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:06:17.075 14:05:54 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:17.075 14:05:54 json_config_extra_key -- 
common/autotest_common.sh@1693 -- # lcov --version 00:06:17.075 14:05:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:17.334 14:05:54 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:17.334 14:05:54 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:17.334 14:05:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:17.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.334 --rc genhtml_branch_coverage=1 00:06:17.334 --rc genhtml_function_coverage=1 00:06:17.334 --rc genhtml_legend=1 00:06:17.334 --rc geninfo_all_blocks=1 00:06:17.334 --rc geninfo_unexecuted_blocks=1 00:06:17.334 00:06:17.334 ' 00:06:17.334 14:05:54 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:17.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.334 --rc genhtml_branch_coverage=1 00:06:17.334 --rc genhtml_function_coverage=1 00:06:17.334 --rc 
genhtml_legend=1 00:06:17.334 --rc geninfo_all_blocks=1 00:06:17.334 --rc geninfo_unexecuted_blocks=1 00:06:17.334 00:06:17.334 ' 00:06:17.334 14:05:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:17.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.334 --rc genhtml_branch_coverage=1 00:06:17.334 --rc genhtml_function_coverage=1 00:06:17.334 --rc genhtml_legend=1 00:06:17.334 --rc geninfo_all_blocks=1 00:06:17.334 --rc geninfo_unexecuted_blocks=1 00:06:17.334 00:06:17.334 ' 00:06:17.334 14:05:54 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:17.334 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:17.334 --rc genhtml_branch_coverage=1 00:06:17.334 --rc genhtml_function_coverage=1 00:06:17.334 --rc genhtml_legend=1 00:06:17.334 --rc geninfo_all_blocks=1 00:06:17.334 --rc geninfo_unexecuted_blocks=1 00:06:17.334 00:06:17.334 ' 00:06:17.334 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5c5f7f81-f6ef-45c0-af5d-fb790bbde370 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5c5f7f81-f6ef-45c0-af5d-fb790bbde370 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:17.334 14:05:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:17.334 14:05:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:17.334 14:05:54 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.334 14:05:54 json_config_extra_key -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.334 14:05:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.334 14:05:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:17.334 14:05:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:17.335 14:05:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:06:17.335 14:05:54 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:17.335 14:05:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:17.335 14:05:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:17.335 14:05:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:17.335 14:05:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:06:17.335 14:05:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:17.335 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:17.335 14:05:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:17.335 14:05:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:17.335 14:05:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:17.335 INFO: launching applications... 
00:06:17.335 14:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:17.335 14:05:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:17.335 14:05:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:17.335 14:05:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:17.335 Waiting for target to run... 00:06:17.335 14:05:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:17.335 14:05:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:17.335 14:05:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.335 14:05:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:17.335 14:05:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=57471 00:06:17.335 14:05:54 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:06:17.335 14:05:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:17.335 14:05:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 57471 /var/tmp/spdk_tgt.sock 00:06:17.335 14:05:54 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 57471 ']' 00:06:17.335 14:05:54 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:17.335 14:05:54 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.335 14:05:54 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 
00:06:17.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:17.335 14:05:54 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.335 14:05:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:17.335 [2024-11-27 14:05:54.565964] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:06:17.335 [2024-11-27 14:05:54.566324] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57471 ] 00:06:17.899 [2024-11-27 14:05:55.016589] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.899 [2024-11-27 14:05:55.132657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.894 14:05:55 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:18.894 14:05:55 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:18.894 14:05:55 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:18.894 00:06:18.894 INFO: shutting down applications... 00:06:18.894 14:05:55 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
00:06:18.894 14:05:55 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:18.894 14:05:55 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:18.894 14:05:55 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:18.894 14:05:55 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 57471 ]] 00:06:18.894 14:05:55 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 57471 00:06:18.894 14:05:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:18.894 14:05:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:18.894 14:05:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57471 00:06:18.894 14:05:55 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:19.158 14:05:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:19.158 14:05:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.158 14:05:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57471 00:06:19.158 14:05:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:19.725 14:05:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:19.725 14:05:56 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:19.725 14:05:56 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57471 00:06:19.725 14:05:56 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.293 14:05:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:20.293 14:05:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.293 14:05:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57471 00:06:20.293 14:05:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:20.860 14:05:57 json_config_extra_key -- json_config/common.sh@40 -- # 
(( i++ )) 00:06:20.860 14:05:57 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:20.860 14:05:57 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57471 00:06:20.860 14:05:57 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.118 14:05:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.118 14:05:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.118 14:05:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57471 00:06:21.118 14:05:58 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:21.683 14:05:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:21.684 14:05:58 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:21.684 14:05:58 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 57471 00:06:21.684 SPDK target shutdown done 00:06:21.684 Success 00:06:21.684 14:05:58 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:21.684 14:05:58 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:21.684 14:05:58 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:21.684 14:05:58 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:21.684 14:05:58 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:21.684 00:06:21.684 real 0m4.614s 00:06:21.684 user 0m4.132s 00:06:21.684 sys 0m0.624s 00:06:21.684 ************************************ 00:06:21.684 END TEST json_config_extra_key 00:06:21.684 ************************************ 00:06:21.684 14:05:58 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:21.684 14:05:58 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:21.684 14:05:58 -- spdk/autotest.sh@161 -- # run_test alias_rpc 
/home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:21.684 14:05:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:21.684 14:05:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:21.684 14:05:58 -- common/autotest_common.sh@10 -- # set +x 00:06:21.684 ************************************ 00:06:21.684 START TEST alias_rpc 00:06:21.684 ************************************ 00:06:21.684 14:05:58 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:21.942 * Looking for test storage... 00:06:21.942 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:06:21.942 14:05:58 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:21.942 14:05:58 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:21.942 14:05:58 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:21.942 14:05:59 alias_rpc -- 
scripts/common.sh@345 -- # : 1 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:21.942 14:05:59 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:21.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.942 --rc genhtml_branch_coverage=1 00:06:21.942 --rc genhtml_function_coverage=1 00:06:21.942 --rc genhtml_legend=1 00:06:21.942 --rc geninfo_all_blocks=1 00:06:21.942 --rc geninfo_unexecuted_blocks=1 00:06:21.942 00:06:21.942 ' 00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:21.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.942 --rc genhtml_branch_coverage=1 00:06:21.942 --rc genhtml_function_coverage=1 00:06:21.942 --rc 
genhtml_legend=1 00:06:21.942 --rc geninfo_all_blocks=1 00:06:21.942 --rc geninfo_unexecuted_blocks=1 00:06:21.942 00:06:21.942 ' 00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:21.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.942 --rc genhtml_branch_coverage=1 00:06:21.942 --rc genhtml_function_coverage=1 00:06:21.942 --rc genhtml_legend=1 00:06:21.942 --rc geninfo_all_blocks=1 00:06:21.942 --rc geninfo_unexecuted_blocks=1 00:06:21.942 00:06:21.942 ' 00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:21.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:21.942 --rc genhtml_branch_coverage=1 00:06:21.942 --rc genhtml_function_coverage=1 00:06:21.942 --rc genhtml_legend=1 00:06:21.942 --rc geninfo_all_blocks=1 00:06:21.942 --rc geninfo_unexecuted_blocks=1 00:06:21.942 00:06:21.942 ' 00:06:21.942 14:05:59 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:21.942 14:05:59 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=57577 00:06:21.942 14:05:59 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:21.942 14:05:59 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 57577 00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 57577 ']' 00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:21.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:21.942 14:05:59 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.199 [2024-11-27 14:05:59.230698] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:06:22.199 [2024-11-27 14:05:59.231116] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57577 ] 00:06:22.199 [2024-11-27 14:05:59.406102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.457 [2024-11-27 14:05:59.538978] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:23.394 14:06:00 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:23.394 14:06:00 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:23.394 14:06:00 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:06:23.654 14:06:00 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 57577 00:06:23.654 14:06:00 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 57577 ']' 00:06:23.654 14:06:00 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 57577 00:06:23.654 14:06:00 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:23.654 14:06:00 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.654 14:06:00 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57577 00:06:23.654 killing process with pid 57577 00:06:23.654 14:06:00 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:23.654 14:06:00 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:23.654 14:06:00 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57577' 00:06:23.654 14:06:00 alias_rpc -- 
common/autotest_common.sh@973 -- # kill 57577 00:06:23.654 14:06:00 alias_rpc -- common/autotest_common.sh@978 -- # wait 57577 00:06:26.185 ************************************ 00:06:26.185 END TEST alias_rpc 00:06:26.185 ************************************ 00:06:26.185 00:06:26.185 real 0m4.095s 00:06:26.185 user 0m4.266s 00:06:26.185 sys 0m0.628s 00:06:26.185 14:06:03 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.185 14:06:03 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.185 14:06:03 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:26.185 14:06:03 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:26.185 14:06:03 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.185 14:06:03 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.185 14:06:03 -- common/autotest_common.sh@10 -- # set +x 00:06:26.185 ************************************ 00:06:26.185 START TEST spdkcli_tcp 00:06:26.185 ************************************ 00:06:26.185 14:06:03 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:06:26.185 * Looking for test storage... 
00:06:26.185 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.186 14:06:03 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:26.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.186 --rc genhtml_branch_coverage=1 00:06:26.186 --rc genhtml_function_coverage=1 00:06:26.186 --rc genhtml_legend=1 00:06:26.186 --rc geninfo_all_blocks=1 00:06:26.186 --rc geninfo_unexecuted_blocks=1 00:06:26.186 00:06:26.186 ' 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:26.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.186 --rc genhtml_branch_coverage=1 00:06:26.186 --rc genhtml_function_coverage=1 00:06:26.186 --rc genhtml_legend=1 00:06:26.186 --rc geninfo_all_blocks=1 00:06:26.186 --rc geninfo_unexecuted_blocks=1 00:06:26.186 00:06:26.186 ' 00:06:26.186 14:06:03 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:26.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.186 --rc genhtml_branch_coverage=1 00:06:26.186 --rc genhtml_function_coverage=1 00:06:26.186 --rc genhtml_legend=1 00:06:26.186 --rc geninfo_all_blocks=1 00:06:26.186 --rc geninfo_unexecuted_blocks=1 00:06:26.186 00:06:26.186 ' 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:26.186 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.186 --rc genhtml_branch_coverage=1 00:06:26.186 --rc genhtml_function_coverage=1 00:06:26.186 --rc genhtml_legend=1 00:06:26.186 --rc geninfo_all_blocks=1 00:06:26.186 --rc geninfo_unexecuted_blocks=1 00:06:26.186 00:06:26.186 ' 00:06:26.186 14:06:03 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:06:26.186 14:06:03 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:06:26.186 14:06:03 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:06:26.186 14:06:03 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:26.186 14:06:03 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:26.186 14:06:03 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:26.186 14:06:03 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.186 14:06:03 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=57684 00:06:26.186 14:06:03 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:26.186 14:06:03 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 57684 00:06:26.186 14:06:03 spdkcli_tcp -- 
common/autotest_common.sh@835 -- # '[' -z 57684 ']' 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.186 14:06:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:26.186 [2024-11-27 14:06:03.376561] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:06:26.186 [2024-11-27 14:06:03.377256] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57684 ] 00:06:26.444 [2024-11-27 14:06:03.563577] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:26.444 [2024-11-27 14:06:03.717476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.444 [2024-11-27 14:06:03.717488] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.387 14:06:04 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:27.387 14:06:04 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:06:27.387 14:06:04 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=57707 00:06:27.387 14:06:04 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:27.387 14:06:04 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:27.645 [ 00:06:27.645 "bdev_malloc_delete", 
00:06:27.645 "bdev_malloc_create", 00:06:27.645 "bdev_null_resize", 00:06:27.645 "bdev_null_delete", 00:06:27.645 "bdev_null_create", 00:06:27.645 "bdev_nvme_cuse_unregister", 00:06:27.645 "bdev_nvme_cuse_register", 00:06:27.645 "bdev_opal_new_user", 00:06:27.645 "bdev_opal_set_lock_state", 00:06:27.645 "bdev_opal_delete", 00:06:27.645 "bdev_opal_get_info", 00:06:27.645 "bdev_opal_create", 00:06:27.645 "bdev_nvme_opal_revert", 00:06:27.645 "bdev_nvme_opal_init", 00:06:27.645 "bdev_nvme_send_cmd", 00:06:27.645 "bdev_nvme_set_keys", 00:06:27.645 "bdev_nvme_get_path_iostat", 00:06:27.645 "bdev_nvme_get_mdns_discovery_info", 00:06:27.645 "bdev_nvme_stop_mdns_discovery", 00:06:27.645 "bdev_nvme_start_mdns_discovery", 00:06:27.645 "bdev_nvme_set_multipath_policy", 00:06:27.645 "bdev_nvme_set_preferred_path", 00:06:27.645 "bdev_nvme_get_io_paths", 00:06:27.645 "bdev_nvme_remove_error_injection", 00:06:27.645 "bdev_nvme_add_error_injection", 00:06:27.645 "bdev_nvme_get_discovery_info", 00:06:27.645 "bdev_nvme_stop_discovery", 00:06:27.645 "bdev_nvme_start_discovery", 00:06:27.645 "bdev_nvme_get_controller_health_info", 00:06:27.645 "bdev_nvme_disable_controller", 00:06:27.645 "bdev_nvme_enable_controller", 00:06:27.645 "bdev_nvme_reset_controller", 00:06:27.645 "bdev_nvme_get_transport_statistics", 00:06:27.645 "bdev_nvme_apply_firmware", 00:06:27.645 "bdev_nvme_detach_controller", 00:06:27.645 "bdev_nvme_get_controllers", 00:06:27.645 "bdev_nvme_attach_controller", 00:06:27.645 "bdev_nvme_set_hotplug", 00:06:27.645 "bdev_nvme_set_options", 00:06:27.645 "bdev_passthru_delete", 00:06:27.645 "bdev_passthru_create", 00:06:27.645 "bdev_lvol_set_parent_bdev", 00:06:27.645 "bdev_lvol_set_parent", 00:06:27.645 "bdev_lvol_check_shallow_copy", 00:06:27.645 "bdev_lvol_start_shallow_copy", 00:06:27.645 "bdev_lvol_grow_lvstore", 00:06:27.645 "bdev_lvol_get_lvols", 00:06:27.645 "bdev_lvol_get_lvstores", 00:06:27.645 "bdev_lvol_delete", 00:06:27.645 "bdev_lvol_set_read_only", 
00:06:27.645 "bdev_lvol_resize", 00:06:27.645 "bdev_lvol_decouple_parent", 00:06:27.645 "bdev_lvol_inflate", 00:06:27.645 "bdev_lvol_rename", 00:06:27.645 "bdev_lvol_clone_bdev", 00:06:27.645 "bdev_lvol_clone", 00:06:27.645 "bdev_lvol_snapshot", 00:06:27.645 "bdev_lvol_create", 00:06:27.645 "bdev_lvol_delete_lvstore", 00:06:27.645 "bdev_lvol_rename_lvstore", 00:06:27.645 "bdev_lvol_create_lvstore", 00:06:27.645 "bdev_raid_set_options", 00:06:27.645 "bdev_raid_remove_base_bdev", 00:06:27.645 "bdev_raid_add_base_bdev", 00:06:27.645 "bdev_raid_delete", 00:06:27.645 "bdev_raid_create", 00:06:27.645 "bdev_raid_get_bdevs", 00:06:27.645 "bdev_error_inject_error", 00:06:27.645 "bdev_error_delete", 00:06:27.645 "bdev_error_create", 00:06:27.645 "bdev_split_delete", 00:06:27.645 "bdev_split_create", 00:06:27.645 "bdev_delay_delete", 00:06:27.645 "bdev_delay_create", 00:06:27.645 "bdev_delay_update_latency", 00:06:27.645 "bdev_zone_block_delete", 00:06:27.645 "bdev_zone_block_create", 00:06:27.645 "blobfs_create", 00:06:27.645 "blobfs_detect", 00:06:27.645 "blobfs_set_cache_size", 00:06:27.645 "bdev_aio_delete", 00:06:27.645 "bdev_aio_rescan", 00:06:27.645 "bdev_aio_create", 00:06:27.645 "bdev_ftl_set_property", 00:06:27.645 "bdev_ftl_get_properties", 00:06:27.645 "bdev_ftl_get_stats", 00:06:27.645 "bdev_ftl_unmap", 00:06:27.645 "bdev_ftl_unload", 00:06:27.645 "bdev_ftl_delete", 00:06:27.645 "bdev_ftl_load", 00:06:27.645 "bdev_ftl_create", 00:06:27.645 "bdev_virtio_attach_controller", 00:06:27.645 "bdev_virtio_scsi_get_devices", 00:06:27.645 "bdev_virtio_detach_controller", 00:06:27.645 "bdev_virtio_blk_set_hotplug", 00:06:27.645 "bdev_iscsi_delete", 00:06:27.645 "bdev_iscsi_create", 00:06:27.645 "bdev_iscsi_set_options", 00:06:27.645 "accel_error_inject_error", 00:06:27.645 "ioat_scan_accel_module", 00:06:27.645 "dsa_scan_accel_module", 00:06:27.646 "iaa_scan_accel_module", 00:06:27.646 "keyring_file_remove_key", 00:06:27.646 "keyring_file_add_key", 00:06:27.646 
"keyring_linux_set_options", 00:06:27.646 "fsdev_aio_delete", 00:06:27.646 "fsdev_aio_create", 00:06:27.646 "iscsi_get_histogram", 00:06:27.646 "iscsi_enable_histogram", 00:06:27.646 "iscsi_set_options", 00:06:27.646 "iscsi_get_auth_groups", 00:06:27.646 "iscsi_auth_group_remove_secret", 00:06:27.646 "iscsi_auth_group_add_secret", 00:06:27.646 "iscsi_delete_auth_group", 00:06:27.646 "iscsi_create_auth_group", 00:06:27.646 "iscsi_set_discovery_auth", 00:06:27.646 "iscsi_get_options", 00:06:27.646 "iscsi_target_node_request_logout", 00:06:27.646 "iscsi_target_node_set_redirect", 00:06:27.646 "iscsi_target_node_set_auth", 00:06:27.646 "iscsi_target_node_add_lun", 00:06:27.646 "iscsi_get_stats", 00:06:27.646 "iscsi_get_connections", 00:06:27.646 "iscsi_portal_group_set_auth", 00:06:27.646 "iscsi_start_portal_group", 00:06:27.646 "iscsi_delete_portal_group", 00:06:27.646 "iscsi_create_portal_group", 00:06:27.646 "iscsi_get_portal_groups", 00:06:27.646 "iscsi_delete_target_node", 00:06:27.646 "iscsi_target_node_remove_pg_ig_maps", 00:06:27.646 "iscsi_target_node_add_pg_ig_maps", 00:06:27.646 "iscsi_create_target_node", 00:06:27.646 "iscsi_get_target_nodes", 00:06:27.646 "iscsi_delete_initiator_group", 00:06:27.646 "iscsi_initiator_group_remove_initiators", 00:06:27.646 "iscsi_initiator_group_add_initiators", 00:06:27.646 "iscsi_create_initiator_group", 00:06:27.646 "iscsi_get_initiator_groups", 00:06:27.646 "nvmf_set_crdt", 00:06:27.646 "nvmf_set_config", 00:06:27.646 "nvmf_set_max_subsystems", 00:06:27.646 "nvmf_stop_mdns_prr", 00:06:27.646 "nvmf_publish_mdns_prr", 00:06:27.646 "nvmf_subsystem_get_listeners", 00:06:27.646 "nvmf_subsystem_get_qpairs", 00:06:27.646 "nvmf_subsystem_get_controllers", 00:06:27.646 "nvmf_get_stats", 00:06:27.646 "nvmf_get_transports", 00:06:27.646 "nvmf_create_transport", 00:06:27.646 "nvmf_get_targets", 00:06:27.646 "nvmf_delete_target", 00:06:27.646 "nvmf_create_target", 00:06:27.646 "nvmf_subsystem_allow_any_host", 00:06:27.646 
"nvmf_subsystem_set_keys", 00:06:27.646 "nvmf_subsystem_remove_host", 00:06:27.646 "nvmf_subsystem_add_host", 00:06:27.646 "nvmf_ns_remove_host", 00:06:27.646 "nvmf_ns_add_host", 00:06:27.646 "nvmf_subsystem_remove_ns", 00:06:27.646 "nvmf_subsystem_set_ns_ana_group", 00:06:27.646 "nvmf_subsystem_add_ns", 00:06:27.646 "nvmf_subsystem_listener_set_ana_state", 00:06:27.646 "nvmf_discovery_get_referrals", 00:06:27.646 "nvmf_discovery_remove_referral", 00:06:27.646 "nvmf_discovery_add_referral", 00:06:27.646 "nvmf_subsystem_remove_listener", 00:06:27.646 "nvmf_subsystem_add_listener", 00:06:27.646 "nvmf_delete_subsystem", 00:06:27.646 "nvmf_create_subsystem", 00:06:27.646 "nvmf_get_subsystems", 00:06:27.646 "env_dpdk_get_mem_stats", 00:06:27.646 "nbd_get_disks", 00:06:27.646 "nbd_stop_disk", 00:06:27.646 "nbd_start_disk", 00:06:27.646 "ublk_recover_disk", 00:06:27.646 "ublk_get_disks", 00:06:27.646 "ublk_stop_disk", 00:06:27.646 "ublk_start_disk", 00:06:27.646 "ublk_destroy_target", 00:06:27.646 "ublk_create_target", 00:06:27.646 "virtio_blk_create_transport", 00:06:27.646 "virtio_blk_get_transports", 00:06:27.646 "vhost_controller_set_coalescing", 00:06:27.646 "vhost_get_controllers", 00:06:27.646 "vhost_delete_controller", 00:06:27.646 "vhost_create_blk_controller", 00:06:27.646 "vhost_scsi_controller_remove_target", 00:06:27.646 "vhost_scsi_controller_add_target", 00:06:27.646 "vhost_start_scsi_controller", 00:06:27.646 "vhost_create_scsi_controller", 00:06:27.646 "thread_set_cpumask", 00:06:27.646 "scheduler_set_options", 00:06:27.646 "framework_get_governor", 00:06:27.646 "framework_get_scheduler", 00:06:27.646 "framework_set_scheduler", 00:06:27.646 "framework_get_reactors", 00:06:27.646 "thread_get_io_channels", 00:06:27.646 "thread_get_pollers", 00:06:27.646 "thread_get_stats", 00:06:27.646 "framework_monitor_context_switch", 00:06:27.646 "spdk_kill_instance", 00:06:27.646 "log_enable_timestamps", 00:06:27.646 "log_get_flags", 00:06:27.646 "log_clear_flag", 
00:06:27.646 "log_set_flag", 00:06:27.646 "log_get_level", 00:06:27.646 "log_set_level", 00:06:27.646 "log_get_print_level", 00:06:27.646 "log_set_print_level", 00:06:27.646 "framework_enable_cpumask_locks", 00:06:27.646 "framework_disable_cpumask_locks", 00:06:27.646 "framework_wait_init", 00:06:27.646 "framework_start_init", 00:06:27.646 "scsi_get_devices", 00:06:27.646 "bdev_get_histogram", 00:06:27.646 "bdev_enable_histogram", 00:06:27.646 "bdev_set_qos_limit", 00:06:27.646 "bdev_set_qd_sampling_period", 00:06:27.646 "bdev_get_bdevs", 00:06:27.646 "bdev_reset_iostat", 00:06:27.646 "bdev_get_iostat", 00:06:27.646 "bdev_examine", 00:06:27.646 "bdev_wait_for_examine", 00:06:27.646 "bdev_set_options", 00:06:27.646 "accel_get_stats", 00:06:27.646 "accel_set_options", 00:06:27.646 "accel_set_driver", 00:06:27.646 "accel_crypto_key_destroy", 00:06:27.646 "accel_crypto_keys_get", 00:06:27.646 "accel_crypto_key_create", 00:06:27.646 "accel_assign_opc", 00:06:27.646 "accel_get_module_info", 00:06:27.646 "accel_get_opc_assignments", 00:06:27.646 "vmd_rescan", 00:06:27.646 "vmd_remove_device", 00:06:27.646 "vmd_enable", 00:06:27.646 "sock_get_default_impl", 00:06:27.646 "sock_set_default_impl", 00:06:27.646 "sock_impl_set_options", 00:06:27.646 "sock_impl_get_options", 00:06:27.646 "iobuf_get_stats", 00:06:27.646 "iobuf_set_options", 00:06:27.646 "keyring_get_keys", 00:06:27.646 "framework_get_pci_devices", 00:06:27.646 "framework_get_config", 00:06:27.646 "framework_get_subsystems", 00:06:27.646 "fsdev_set_opts", 00:06:27.646 "fsdev_get_opts", 00:06:27.646 "trace_get_info", 00:06:27.646 "trace_get_tpoint_group_mask", 00:06:27.646 "trace_disable_tpoint_group", 00:06:27.646 "trace_enable_tpoint_group", 00:06:27.646 "trace_clear_tpoint_mask", 00:06:27.646 "trace_set_tpoint_mask", 00:06:27.646 "notify_get_notifications", 00:06:27.646 "notify_get_types", 00:06:27.646 "spdk_get_version", 00:06:27.646 "rpc_get_methods" 00:06:27.646 ] 00:06:27.646 14:06:04 spdkcli_tcp -- 
spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:27.646 14:06:04 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:27.646 14:06:04 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:27.904 14:06:04 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:27.904 14:06:04 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 57684 00:06:27.904 14:06:04 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 57684 ']' 00:06:27.904 14:06:04 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 57684 00:06:27.904 14:06:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:06:27.904 14:06:04 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.905 14:06:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57684 00:06:27.905 killing process with pid 57684 00:06:27.905 14:06:04 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.905 14:06:04 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.905 14:06:04 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57684' 00:06:27.905 14:06:04 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 57684 00:06:27.905 14:06:04 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 57684 00:06:30.439 ************************************ 00:06:30.439 END TEST spdkcli_tcp 00:06:30.439 ************************************ 00:06:30.439 00:06:30.439 real 0m4.139s 00:06:30.439 user 0m7.345s 00:06:30.439 sys 0m0.699s 00:06:30.439 14:06:07 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:30.439 14:06:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:30.439 14:06:07 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:30.439 14:06:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:30.439 14:06:07 -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:06:30.439 14:06:07 -- common/autotest_common.sh@10 -- # set +x 00:06:30.439 ************************************ 00:06:30.439 START TEST dpdk_mem_utility 00:06:30.439 ************************************ 00:06:30.439 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:30.439 * Looking for test storage... 00:06:30.439 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:06:30.439 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:30.439 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:06:30.439 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:30.439 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:06:30.439 
14:06:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:30.439 14:06:07 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:06:30.439 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:30.439 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:30.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.439 --rc genhtml_branch_coverage=1 00:06:30.439 --rc genhtml_function_coverage=1 00:06:30.439 --rc genhtml_legend=1 00:06:30.439 --rc geninfo_all_blocks=1 00:06:30.439 --rc geninfo_unexecuted_blocks=1 00:06:30.439 00:06:30.439 ' 00:06:30.439 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:30.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.439 --rc 
genhtml_branch_coverage=1 00:06:30.439 --rc genhtml_function_coverage=1 00:06:30.439 --rc genhtml_legend=1 00:06:30.439 --rc geninfo_all_blocks=1 00:06:30.439 --rc geninfo_unexecuted_blocks=1 00:06:30.439 00:06:30.439 ' 00:06:30.439 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:30.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.439 --rc genhtml_branch_coverage=1 00:06:30.439 --rc genhtml_function_coverage=1 00:06:30.439 --rc genhtml_legend=1 00:06:30.439 --rc geninfo_all_blocks=1 00:06:30.439 --rc geninfo_unexecuted_blocks=1 00:06:30.439 00:06:30.439 ' 00:06:30.439 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:30.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:30.439 --rc genhtml_branch_coverage=1 00:06:30.439 --rc genhtml_function_coverage=1 00:06:30.439 --rc genhtml_legend=1 00:06:30.439 --rc geninfo_all_blocks=1 00:06:30.439 --rc geninfo_unexecuted_blocks=1 00:06:30.439 00:06:30.439 ' 00:06:30.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:30.440 14:06:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:30.440 14:06:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=57806 00:06:30.440 14:06:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:06:30.440 14:06:07 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 57806 00:06:30.440 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 57806 ']' 00:06:30.440 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:30.440 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:30.440 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:30.440 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:30.440 14:06:07 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:30.440 [2024-11-27 14:06:07.565205] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:06:30.440 [2024-11-27 14:06:07.566242] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57806 ] 00:06:30.698 [2024-11-27 14:06:07.739809] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:30.698 [2024-11-27 14:06:07.872643] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.635 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:31.635 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:06:31.635 14:06:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:31.635 14:06:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:31.635 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:31.635 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:31.635 { 00:06:31.635 "filename": "/tmp/spdk_mem_dump.txt" 00:06:31.635 } 00:06:31.635 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:31.635 14:06:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:06:31.635 DPDK memory size 824.000000 MiB in 1 heap(s) 00:06:31.635 1 heaps totaling size 824.000000 MiB 00:06:31.635 size: 824.000000 MiB heap id: 0 00:06:31.635 end heaps---------- 00:06:31.635 9 mempools totaling size 603.782043 MiB 00:06:31.635 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:31.635 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:31.635 size: 100.555481 MiB name: bdev_io_57806 00:06:31.635 size: 50.003479 MiB name: msgpool_57806 00:06:31.635 size: 36.509338 MiB name: fsdev_io_57806 00:06:31.635 size: 
21.763794 MiB name: PDU_Pool 00:06:31.635 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:31.635 size: 4.133484 MiB name: evtpool_57806 00:06:31.635 size: 0.026123 MiB name: Session_Pool 00:06:31.635 end mempools------- 00:06:31.635 6 memzones totaling size 4.142822 MiB 00:06:31.635 size: 1.000366 MiB name: RG_ring_0_57806 00:06:31.635 size: 1.000366 MiB name: RG_ring_1_57806 00:06:31.635 size: 1.000366 MiB name: RG_ring_4_57806 00:06:31.635 size: 1.000366 MiB name: RG_ring_5_57806 00:06:31.635 size: 0.125366 MiB name: RG_ring_2_57806 00:06:31.635 size: 0.015991 MiB name: RG_ring_3_57806 00:06:31.635 end memzones------- 00:06:31.635 14:06:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:06:31.895 heap id: 0 total size: 824.000000 MiB number of busy elements: 318 number of free elements: 18 00:06:31.895 list of free elements. size: 16.780640 MiB 00:06:31.895 element at address: 0x200006400000 with size: 1.995972 MiB 00:06:31.895 element at address: 0x20000a600000 with size: 1.995972 MiB 00:06:31.895 element at address: 0x200003e00000 with size: 1.991028 MiB 00:06:31.895 element at address: 0x200019500040 with size: 0.999939 MiB 00:06:31.895 element at address: 0x200019900040 with size: 0.999939 MiB 00:06:31.895 element at address: 0x200019a00000 with size: 0.999084 MiB 00:06:31.895 element at address: 0x200032600000 with size: 0.994324 MiB 00:06:31.895 element at address: 0x200000400000 with size: 0.992004 MiB 00:06:31.895 element at address: 0x200019200000 with size: 0.959656 MiB 00:06:31.895 element at address: 0x200019d00040 with size: 0.936401 MiB 00:06:31.895 element at address: 0x200000200000 with size: 0.716980 MiB 00:06:31.895 element at address: 0x20001b400000 with size: 0.561951 MiB 00:06:31.895 element at address: 0x200000c00000 with size: 0.489197 MiB 00:06:31.895 element at address: 0x200019600000 with size: 0.487976 MiB 00:06:31.895 element at address: 0x200019e00000 
with size: 0.485413 MiB 00:06:31.896 element at address: 0x200012c00000 with size: 0.433472 MiB 00:06:31.896 element at address: 0x200028800000 with size: 0.390442 MiB 00:06:31.896 element at address: 0x200000800000 with size: 0.350891 MiB 00:06:31.896 list of standard malloc elements. size: 199.288452 MiB 00:06:31.896 element at address: 0x20000a7fef80 with size: 132.000183 MiB 00:06:31.896 element at address: 0x2000065fef80 with size: 64.000183 MiB 00:06:31.896 element at address: 0x2000193fff80 with size: 1.000183 MiB 00:06:31.896 element at address: 0x2000197fff80 with size: 1.000183 MiB 00:06:31.896 element at address: 0x200019bfff80 with size: 1.000183 MiB 00:06:31.896 element at address: 0x2000003d9e80 with size: 0.140808 MiB 00:06:31.896 element at address: 0x200019deff40 with size: 0.062683 MiB 00:06:31.896 element at address: 0x2000003fdf40 with size: 0.007996 MiB 00:06:31.896 element at address: 0x20000a5ff040 with size: 0.000427 MiB 00:06:31.896 element at address: 0x200019defdc0 with size: 0.000366 MiB 00:06:31.896 element at address: 0x200012bff040 with size: 0.000305 MiB 00:06:31.896 element at address: 0x2000002d7b00 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000003d9d80 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fdf40 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fe040 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fe140 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fe240 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fe340 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fe440 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fe540 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fe640 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fe740 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fe840 with size: 0.000244 MiB 00:06:31.896 element at address: 
0x2000004fe940 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fea40 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004feb40 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fec40 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fed40 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fee40 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004fef40 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ff040 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ff140 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ff240 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ff340 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ff440 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ff540 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ff640 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ff740 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ff840 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ff940 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ffbc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ffcc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000004ffdc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087e1c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087e2c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087e3c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087e4c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087e5c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087e6c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087e7c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087e8c0 with size: 0.000244 MiB 00:06:31.896 
element at address: 0x20000087e9c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087eac0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087ebc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087ecc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087edc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087eec0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087efc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087f0c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087f1c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087f2c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087f3c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000087f4c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000008ff800 with size: 0.000244 MiB 00:06:31.896 element at address: 0x2000008ffa80 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7d3c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7d4c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7d5c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7d6c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7d7c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7d8c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7d9c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7dac0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7dbc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7dcc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7ddc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7dec0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7dfc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7e0c0 with size: 0.000244 
MiB 00:06:31.896 element at address: 0x200000c7e1c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7e2c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7e3c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7e4c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7e5c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7e6c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7e7c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7e8c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7e9c0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7eac0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000c7ebc0 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000cfef00 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200000cff000 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ff200 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ff300 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ff400 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ff500 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ff600 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ff700 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ff800 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ff900 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ffa00 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ffb00 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ffc00 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ffd00 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5ffe00 with size: 0.000244 MiB 00:06:31.896 element at address: 0x20000a5fff00 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200012bff180 
with size: 0.000244 MiB 00:06:31.896 element at address: 0x200012bff280 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200012bff380 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200012bff480 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200012bff580 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200012bff680 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200012bff780 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200012bff880 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200012bff980 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200012bffa80 with size: 0.000244 MiB 00:06:31.896 element at address: 0x200012bffb80 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012bffc80 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012bfff00 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012c6ef80 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012c6f080 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012c6f180 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012c6f280 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012c6f380 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012c6f480 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012c6f580 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012c6f680 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012c6f780 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012c6f880 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200012cefbc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x2000192fdd00 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001967cec0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001967cfc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001967d0c0 with size: 0.000244 MiB 00:06:31.897 element at 
address: 0x20001967d1c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001967d2c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001967d3c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001967d4c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001967d5c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001967d6c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001967d7c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001967d8c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001967d9c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x2000196fdd00 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200019affc40 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200019defbc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200019defcc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200019ebc680 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b48fdc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b48fec0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b48ffc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4900c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4901c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4902c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4903c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4904c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4905c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4906c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4907c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4908c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4909c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b490ac0 with size: 0.000244 MiB 
00:06:31.897 element at address: 0x20001b490bc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b490cc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b490dc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b490ec0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b490fc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4910c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4911c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4912c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4913c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4914c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4915c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4916c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4917c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4918c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4919c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b491ac0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b491bc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b491cc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b491dc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b491ec0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b491fc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4920c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4921c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4922c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4923c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4924c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4925c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4926c0 with 
size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4927c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4928c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4929c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b492ac0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b492bc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b492cc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b492dc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b492ec0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b492fc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4930c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4931c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4932c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4933c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4934c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4935c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4936c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4937c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4938c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4939c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b493ac0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b493bc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b493cc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b493dc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b493ec0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b493fc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4940c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4941c0 with size: 0.000244 MiB 00:06:31.897 element at address: 
0x20001b4942c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4943c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4944c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4945c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4946c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4947c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4948c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4949c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b494ac0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b494bc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b494cc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b494dc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b494ec0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b494fc0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4950c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4951c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4952c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20001b4953c0 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200028863f40 with size: 0.000244 MiB 00:06:31.897 element at address: 0x200028864040 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20002886ad00 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20002886af80 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20002886b080 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20002886b180 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20002886b280 with size: 0.000244 MiB 00:06:31.897 element at address: 0x20002886b380 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886b480 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886b580 with size: 0.000244 MiB 00:06:31.898 
element at address: 0x20002886b680 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886b780 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886b880 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886b980 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886ba80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886bb80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886bc80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886bd80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886be80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886bf80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886c080 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886c180 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886c280 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886c380 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886c480 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886c580 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886c680 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886c780 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886c880 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886c980 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886ca80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886cb80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886cc80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886cd80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886ce80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886cf80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886d080 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886d180 with size: 0.000244 
MiB 00:06:31.898 element at address: 0x20002886d280 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886d380 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886d480 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886d580 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886d680 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886d780 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886d880 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886d980 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886da80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886db80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886dc80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886dd80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886de80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886df80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886e080 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886e180 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886e280 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886e380 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886e480 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886e580 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886e680 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886e780 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886e880 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886e980 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886ea80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886eb80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886ec80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886ed80 
with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886ee80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886ef80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886f080 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886f180 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886f280 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886f380 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886f480 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886f580 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886f680 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886f780 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886f880 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886f980 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886fa80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886fb80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886fc80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886fd80 with size: 0.000244 MiB 00:06:31.898 element at address: 0x20002886fe80 with size: 0.000244 MiB 00:06:31.898 list of memzone associated elements. 
size: 607.930908 MiB 00:06:31.898 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:06:31.898 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:31.898 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:06:31.898 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:31.898 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:06:31.898 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_57806_0 00:06:31.898 element at address: 0x200000dff340 with size: 48.003113 MiB 00:06:31.898 associated memzone info: size: 48.002930 MiB name: MP_msgpool_57806_0 00:06:31.898 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:06:31.898 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_57806_0 00:06:31.898 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:06:31.898 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:31.898 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:06:31.898 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:31.898 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:06:31.898 associated memzone info: size: 3.000122 MiB name: MP_evtpool_57806_0 00:06:31.898 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:06:31.898 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_57806 00:06:31.898 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:06:31.898 associated memzone info: size: 1.007996 MiB name: MP_evtpool_57806 00:06:31.898 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:06:31.898 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:31.898 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:06:31.898 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:31.898 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:06:31.898 associated 
memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:31.898 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:06:31.898 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:31.898 element at address: 0x200000cff100 with size: 1.000549 MiB 00:06:31.898 associated memzone info: size: 1.000366 MiB name: RG_ring_0_57806 00:06:31.898 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:06:31.898 associated memzone info: size: 1.000366 MiB name: RG_ring_1_57806 00:06:31.898 element at address: 0x200019affd40 with size: 1.000549 MiB 00:06:31.898 associated memzone info: size: 1.000366 MiB name: RG_ring_4_57806 00:06:31.898 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:06:31.898 associated memzone info: size: 1.000366 MiB name: RG_ring_5_57806 00:06:31.898 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:06:31.898 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_57806 00:06:31.898 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:06:31.898 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_57806 00:06:31.898 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:06:31.898 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:31.898 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:06:31.898 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:31.898 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:06:31.898 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:31.898 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:06:31.898 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_57806 00:06:31.898 element at address: 0x20000085df80 with size: 0.125549 MiB 00:06:31.899 associated memzone info: size: 0.125366 MiB name: RG_ring_2_57806 00:06:31.899 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:06:31.899 
associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:31.899 element at address: 0x200028864140 with size: 0.023804 MiB 00:06:31.899 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:31.899 element at address: 0x200000859d40 with size: 0.016174 MiB 00:06:31.899 associated memzone info: size: 0.015991 MiB name: RG_ring_3_57806 00:06:31.899 element at address: 0x20002886a2c0 with size: 0.002502 MiB 00:06:31.899 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:31.899 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:06:31.899 associated memzone info: size: 0.000183 MiB name: MP_msgpool_57806 00:06:31.899 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:06:31.899 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_57806 00:06:31.899 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:06:31.899 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_57806 00:06:31.899 element at address: 0x20002886ae00 with size: 0.000366 MiB 00:06:31.899 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:31.899 14:06:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:31.899 14:06:08 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 57806 00:06:31.899 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 57806 ']' 00:06:31.899 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 57806 00:06:31.899 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:06:31.899 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:31.899 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 57806 00:06:31.899 killing process with pid 57806 00:06:31.899 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:06:31.899 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:31.899 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 57806' 00:06:31.899 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 57806 00:06:31.899 14:06:08 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 57806 00:06:34.429 ************************************ 00:06:34.429 END TEST dpdk_mem_utility 00:06:34.429 ************************************ 00:06:34.429 00:06:34.429 real 0m3.940s 00:06:34.429 user 0m4.000s 00:06:34.429 sys 0m0.625s 00:06:34.429 14:06:11 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:34.429 14:06:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:34.429 14:06:11 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:34.429 14:06:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:34.429 14:06:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.429 14:06:11 -- common/autotest_common.sh@10 -- # set +x 00:06:34.429 ************************************ 00:06:34.429 START TEST event 00:06:34.429 ************************************ 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:06:34.429 * Looking for test storage... 
00:06:34.429 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1693 -- # lcov --version 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:34.429 14:06:11 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:34.429 14:06:11 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:34.429 14:06:11 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:34.429 14:06:11 event -- scripts/common.sh@336 -- # IFS=.-: 00:06:34.429 14:06:11 event -- scripts/common.sh@336 -- # read -ra ver1 00:06:34.429 14:06:11 event -- scripts/common.sh@337 -- # IFS=.-: 00:06:34.429 14:06:11 event -- scripts/common.sh@337 -- # read -ra ver2 00:06:34.429 14:06:11 event -- scripts/common.sh@338 -- # local 'op=<' 00:06:34.429 14:06:11 event -- scripts/common.sh@340 -- # ver1_l=2 00:06:34.429 14:06:11 event -- scripts/common.sh@341 -- # ver2_l=1 00:06:34.429 14:06:11 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:34.429 14:06:11 event -- scripts/common.sh@344 -- # case "$op" in 00:06:34.429 14:06:11 event -- scripts/common.sh@345 -- # : 1 00:06:34.429 14:06:11 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:34.429 14:06:11 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:34.429 14:06:11 event -- scripts/common.sh@365 -- # decimal 1 00:06:34.429 14:06:11 event -- scripts/common.sh@353 -- # local d=1 00:06:34.429 14:06:11 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:34.429 14:06:11 event -- scripts/common.sh@355 -- # echo 1 00:06:34.429 14:06:11 event -- scripts/common.sh@365 -- # ver1[v]=1 00:06:34.429 14:06:11 event -- scripts/common.sh@366 -- # decimal 2 00:06:34.429 14:06:11 event -- scripts/common.sh@353 -- # local d=2 00:06:34.429 14:06:11 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:34.429 14:06:11 event -- scripts/common.sh@355 -- # echo 2 00:06:34.429 14:06:11 event -- scripts/common.sh@366 -- # ver2[v]=2 00:06:34.429 14:06:11 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:34.429 14:06:11 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:34.429 14:06:11 event -- scripts/common.sh@368 -- # return 0 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:34.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.429 --rc genhtml_branch_coverage=1 00:06:34.429 --rc genhtml_function_coverage=1 00:06:34.429 --rc genhtml_legend=1 00:06:34.429 --rc geninfo_all_blocks=1 00:06:34.429 --rc geninfo_unexecuted_blocks=1 00:06:34.429 00:06:34.429 ' 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:34.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.429 --rc genhtml_branch_coverage=1 00:06:34.429 --rc genhtml_function_coverage=1 00:06:34.429 --rc genhtml_legend=1 00:06:34.429 --rc geninfo_all_blocks=1 00:06:34.429 --rc geninfo_unexecuted_blocks=1 00:06:34.429 00:06:34.429 ' 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:34.429 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:06:34.429 --rc genhtml_branch_coverage=1 00:06:34.429 --rc genhtml_function_coverage=1 00:06:34.429 --rc genhtml_legend=1 00:06:34.429 --rc geninfo_all_blocks=1 00:06:34.429 --rc geninfo_unexecuted_blocks=1 00:06:34.429 00:06:34.429 ' 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:34.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:34.429 --rc genhtml_branch_coverage=1 00:06:34.429 --rc genhtml_function_coverage=1 00:06:34.429 --rc genhtml_legend=1 00:06:34.429 --rc geninfo_all_blocks=1 00:06:34.429 --rc geninfo_unexecuted_blocks=1 00:06:34.429 00:06:34.429 ' 00:06:34.429 14:06:11 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:34.429 14:06:11 event -- bdev/nbd_common.sh@6 -- # set -e 00:06:34.429 14:06:11 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:06:34.429 14:06:11 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:34.429 14:06:11 event -- common/autotest_common.sh@10 -- # set +x 00:06:34.429 ************************************ 00:06:34.429 START TEST event_perf 00:06:34.429 ************************************ 00:06:34.429 14:06:11 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:06:34.429 Running I/O for 1 seconds...[2024-11-27 14:06:11.466962] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:06:34.430 [2024-11-27 14:06:11.467276] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57914 ] 00:06:34.430 [2024-11-27 14:06:11.649713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:34.688 [2024-11-27 14:06:11.797411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:34.688 [2024-11-27 14:06:11.797527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:34.688 [2024-11-27 14:06:11.797629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:34.688 Running I/O for 1 seconds...[2024-11-27 14:06:11.797637] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.066 00:06:36.066 lcore 0: 192718 00:06:36.066 lcore 1: 192718 00:06:36.066 lcore 2: 192719 00:06:36.066 lcore 3: 192719 00:06:36.066 done. 
00:06:36.066 00:06:36.066 real 0m1.623s 00:06:36.066 user 0m4.376s 00:06:36.066 sys 0m0.118s 00:06:36.066 ************************************ 00:06:36.066 END TEST event_perf 00:06:36.066 ************************************ 00:06:36.066 14:06:13 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:36.066 14:06:13 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:06:36.066 14:06:13 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:36.066 14:06:13 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:36.066 14:06:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.066 14:06:13 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.066 ************************************ 00:06:36.066 START TEST event_reactor 00:06:36.066 ************************************ 00:06:36.066 14:06:13 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:06:36.066 [2024-11-27 14:06:13.132754] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:06:36.066 [2024-11-27 14:06:13.132928] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57959 ] 00:06:36.066 [2024-11-27 14:06:13.319271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.323 [2024-11-27 14:06:13.447288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.775 test_start 00:06:37.775 oneshot 00:06:37.775 tick 100 00:06:37.775 tick 100 00:06:37.775 tick 250 00:06:37.775 tick 100 00:06:37.775 tick 100 00:06:37.775 tick 100 00:06:37.775 tick 250 00:06:37.775 tick 500 00:06:37.775 tick 100 00:06:37.775 tick 100 00:06:37.775 tick 250 00:06:37.775 tick 100 00:06:37.775 tick 100 00:06:37.775 test_end 00:06:37.775 00:06:37.775 real 0m1.587s 00:06:37.775 user 0m1.381s 00:06:37.775 sys 0m0.097s 00:06:37.775 14:06:14 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.775 ************************************ 00:06:37.775 END TEST event_reactor 00:06:37.775 ************************************ 00:06:37.775 14:06:14 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:06:37.775 14:06:14 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:37.775 14:06:14 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:06:37.775 14:06:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.775 14:06:14 event -- common/autotest_common.sh@10 -- # set +x 00:06:37.775 ************************************ 00:06:37.775 START TEST event_reactor_perf 00:06:37.775 ************************************ 00:06:37.775 14:06:14 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:06:37.775 [2024-11-27 
14:06:14.777512] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:06:37.775 [2024-11-27 14:06:14.777734] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid57996 ] 00:06:37.775 [2024-11-27 14:06:14.962159] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.034 [2024-11-27 14:06:15.131337] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.411 test_start 00:06:39.411 test_end 00:06:39.411 Performance: 276119 events per second 00:06:39.411 00:06:39.411 real 0m1.638s 00:06:39.411 user 0m1.416s 00:06:39.411 sys 0m0.112s 00:06:39.412 14:06:16 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.412 14:06:16 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:06:39.412 ************************************ 00:06:39.412 END TEST event_reactor_perf 00:06:39.412 ************************************ 00:06:39.412 14:06:16 event -- event/event.sh@49 -- # uname -s 00:06:39.412 14:06:16 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:06:39.412 14:06:16 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:39.412 14:06:16 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.412 14:06:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.412 14:06:16 event -- common/autotest_common.sh@10 -- # set +x 00:06:39.412 ************************************ 00:06:39.412 START TEST event_scheduler 00:06:39.412 ************************************ 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:06:39.412 * Looking for test storage... 
00:06:39.412 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:39.412 14:06:16 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.412 --rc genhtml_branch_coverage=1 00:06:39.412 --rc genhtml_function_coverage=1 00:06:39.412 --rc genhtml_legend=1 00:06:39.412 --rc geninfo_all_blocks=1 00:06:39.412 --rc geninfo_unexecuted_blocks=1 00:06:39.412 00:06:39.412 ' 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.412 --rc genhtml_branch_coverage=1 00:06:39.412 --rc genhtml_function_coverage=1 00:06:39.412 --rc 
genhtml_legend=1 00:06:39.412 --rc geninfo_all_blocks=1 00:06:39.412 --rc geninfo_unexecuted_blocks=1 00:06:39.412 00:06:39.412 ' 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.412 --rc genhtml_branch_coverage=1 00:06:39.412 --rc genhtml_function_coverage=1 00:06:39.412 --rc genhtml_legend=1 00:06:39.412 --rc geninfo_all_blocks=1 00:06:39.412 --rc geninfo_unexecuted_blocks=1 00:06:39.412 00:06:39.412 ' 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:39.412 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:39.412 --rc genhtml_branch_coverage=1 00:06:39.412 --rc genhtml_function_coverage=1 00:06:39.412 --rc genhtml_legend=1 00:06:39.412 --rc geninfo_all_blocks=1 00:06:39.412 --rc geninfo_unexecuted_blocks=1 00:06:39.412 00:06:39.412 ' 00:06:39.412 14:06:16 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:06:39.412 14:06:16 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=58066 00:06:39.412 14:06:16 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:06:39.412 14:06:16 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:06:39.412 14:06:16 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 58066 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 58066 ']' 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:06:39.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.412 14:06:16 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:39.671 [2024-11-27 14:06:16.723881] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:06:39.671 [2024-11-27 14:06:16.724056] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58066 ] 00:06:39.671 [2024-11-27 14:06:16.913631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:39.929 [2024-11-27 14:06:17.075802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.929 [2024-11-27 14:06:17.075979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:39.929 [2024-11-27 14:06:17.076106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:39.929 [2024-11-27 14:06:17.076795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.498 14:06:17 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.498 14:06:17 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:06:40.498 14:06:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:06:40.498 14:06:17 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.498 14:06:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.498 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.498 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.498 POWER: failed to open 
/sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.498 POWER: Cannot set governor of lcore 0 to performance 00:06:40.498 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.498 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.498 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:06:40.498 POWER: Cannot set governor of lcore 0 to userspace 00:06:40.498 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:06:40.498 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:06:40.498 POWER: Unable to set Power Management Environment for lcore 0 00:06:40.498 [2024-11-27 14:06:17.659061] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:06:40.498 [2024-11-27 14:06:17.659092] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:06:40.498 [2024-11-27 14:06:17.659107] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:06:40.498 [2024-11-27 14:06:17.659132] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:06:40.498 [2024-11-27 14:06:17.659144] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:06:40.498 [2024-11-27 14:06:17.659158] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:06:40.498 14:06:17 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.498 14:06:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:06:40.498 14:06:17 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.498 14:06:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.757 [2024-11-27 14:06:17.997126] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
00:06:40.757 14:06:17 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.757 14:06:17 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:40.757 14:06:17 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.757 14:06:17 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.757 14:06:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:40.757 ************************************ 00:06:40.757 START TEST scheduler_create_thread 00:06:40.757 ************************************ 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.757 2 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:40.757 3 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- 
scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:40.757 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.015 4 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.016 5 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.016 6 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@10 -- # set +x 00:06:41.016 7 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.016 8 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.016 9 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.016 10 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n 
half_active -a 0 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:41.016 14:06:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:42.393 14:06:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:42.393 14:06:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:42.393 14:06:19 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:42.393 14:06:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:42.393 14:06:19 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.771 ************************************ 00:06:43.771 END TEST scheduler_create_thread 00:06:43.771 ************************************ 00:06:43.771 14:06:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:43.771 00:06:43.771 real 0m2.619s 00:06:43.771 user 0m0.015s 00:06:43.771 sys 0m0.011s 00:06:43.771 14:06:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:43.771 14:06:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:43.771 14:06:20 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:43.771 14:06:20 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 58066 00:06:43.771 14:06:20 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 58066 ']' 00:06:43.771 14:06:20 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 58066 00:06:43.771 14:06:20 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:06:43.771 14:06:20 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:43.771 14:06:20 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58066 00:06:43.771 killing process with pid 58066 00:06:43.771 14:06:20 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:43.771 14:06:20 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:43.771 14:06:20 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58066' 00:06:43.771 14:06:20 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 58066 00:06:43.771 14:06:20 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 58066 00:06:44.030 [2024-11-27 14:06:21.107619] scheduler.c: 
360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:06:44.967 00:06:44.967 real 0m5.772s 00:06:44.967 user 0m9.903s 00:06:44.967 sys 0m0.512s 00:06:44.967 14:06:22 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:44.967 14:06:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:44.967 ************************************ 00:06:44.967 END TEST event_scheduler 00:06:44.967 ************************************ 00:06:44.967 14:06:22 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:44.967 14:06:22 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:44.967 14:06:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:44.967 14:06:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:44.967 14:06:22 event -- common/autotest_common.sh@10 -- # set +x 00:06:44.967 ************************************ 00:06:44.967 START TEST app_repeat 00:06:44.967 ************************************ 00:06:44.967 14:06:22 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:06:44.967 14:06:22 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.967 14:06:22 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:44.967 14:06:22 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:44.967 14:06:22 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:44.967 14:06:22 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:44.967 14:06:22 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:44.967 14:06:22 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:45.225 Process app_repeat pid: 58178 00:06:45.225 spdk_app_start Round 0 00:06:45.225 14:06:22 event.app_repeat -- event/event.sh@19 -- # repeat_pid=58178 00:06:45.225 14:06:22 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' 
SIGINT SIGTERM EXIT 00:06:45.225 14:06:22 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 58178' 00:06:45.226 14:06:22 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:45.226 14:06:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:45.226 14:06:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:45.226 14:06:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58178 /var/tmp/spdk-nbd.sock 00:06:45.226 14:06:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58178 ']' 00:06:45.226 14:06:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:45.226 14:06:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:45.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:45.226 14:06:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:45.226 14:06:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:45.226 14:06:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:45.226 [2024-11-27 14:06:22.297903] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:06:45.226 [2024-11-27 14:06:22.299361] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58178 ] 00:06:45.226 [2024-11-27 14:06:22.474590] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:45.484 [2024-11-27 14:06:22.607297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.484 [2024-11-27 14:06:22.607300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.419 14:06:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.419 14:06:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:46.419 14:06:23 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.678 Malloc0 00:06:46.678 14:06:23 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:46.938 Malloc1 00:06:46.938 14:06:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:46.938 14:06:24 event.app_repeat -- 
bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:46.938 14:06:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:47.207 /dev/nbd0 00:06:47.207 14:06:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:47.207 14:06:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.207 1+0 records in 00:06:47.207 1+0 
records out 00:06:47.207 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000527505 s, 7.8 MB/s 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.207 14:06:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:47.207 14:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.207 14:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.207 14:06:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:47.774 /dev/nbd1 00:06:47.774 14:06:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:47.774 14:06:24 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 
of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:47.774 1+0 records in 00:06:47.774 1+0 records out 00:06:47.774 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000375004 s, 10.9 MB/s 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:47.774 14:06:24 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:47.774 14:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:47.774 14:06:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:47.774 14:06:24 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:47.774 14:06:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.774 14:06:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:48.032 14:06:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:48.032 { 00:06:48.032 "nbd_device": "/dev/nbd0", 00:06:48.032 "bdev_name": "Malloc0" 00:06:48.032 }, 00:06:48.032 { 00:06:48.032 "nbd_device": "/dev/nbd1", 00:06:48.032 "bdev_name": "Malloc1" 00:06:48.032 } 00:06:48.032 ]' 00:06:48.032 14:06:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:48.032 { 00:06:48.032 "nbd_device": "/dev/nbd0", 00:06:48.032 "bdev_name": "Malloc0" 00:06:48.032 }, 00:06:48.032 { 00:06:48.032 "nbd_device": "/dev/nbd1", 00:06:48.032 "bdev_name": "Malloc1" 00:06:48.032 } 00:06:48.032 ]' 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 
00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:48.033 /dev/nbd1' 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:48.033 /dev/nbd1' 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:48.033 256+0 records in 00:06:48.033 256+0 records out 00:06:48.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00586682 s, 179 MB/s 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:48.033 256+0 records in 00:06:48.033 256+0 records out 00:06:48.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0304676 s, 34.4 MB/s 00:06:48.033 14:06:25 event.app_repeat -- 
bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:48.033 256+0 records in 00:06:48.033 256+0 records out 00:06:48.033 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0342855 s, 30.6 MB/s 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:48.033 14:06:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:48.291 14:06:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:48.291 14:06:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:48.291 14:06:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.291 14:06:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # 
nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:48.291 14:06:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:48.291 14:06:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:48.291 14:06:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.291 14:06:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:48.550 14:06:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:48.550 14:06:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:48.550 14:06:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:48.550 14:06:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.550 14:06:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.550 14:06:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:48.550 14:06:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:48.550 14:06:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.550 14:06:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.550 14:06:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:48.808 14:06:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:48.808 14:06:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:48.808 14:06:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:48.808 14:06:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.808 14:06:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.808 14:06:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:48.808 14:06:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # 
break 00:06:48.808 14:06:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.808 14:06:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.808 14:06:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.808 14:06:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:49.066 14:06:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:49.066 14:06:26 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:49.633 14:06:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:50.643 [2024-11-27 14:06:27.833528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:50.900 [2024-11-27 14:06:27.960082] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.900 [2024-11-27 14:06:27.960095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.900 
[2024-11-27 14:06:28.149043] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:50.900 [2024-11-27 14:06:28.149171] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:52.799 14:06:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:52.799 spdk_app_start Round 1 00:06:52.799 14:06:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:52.799 14:06:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58178 /var/tmp/spdk-nbd.sock 00:06:52.799 14:06:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58178 ']' 00:06:52.799 14:06:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:52.799 14:06:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:52.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:52.799 14:06:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:06:52.799 14:06:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:52.799 14:06:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:52.799 14:06:30 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:52.799 14:06:30 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:52.799 14:06:30 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.366 Malloc0 00:06:53.366 14:06:30 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:53.623 Malloc1 00:06:53.623 14:06:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.623 14:06:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.623 14:06:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.623 14:06:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:53.623 14:06:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.623 14:06:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:53.623 14:06:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:53.623 14:06:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:53.623 14:06:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:53.623 14:06:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:53.623 14:06:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:53.624 14:06:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:53.624 14:06:30 
event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:53.624 14:06:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:53.624 14:06:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.624 14:06:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:53.881 /dev/nbd0 00:06:53.882 14:06:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:53.882 14:06:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:53.882 1+0 records in 00:06:53.882 1+0 records out 00:06:53.882 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000333991 s, 12.3 MB/s 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:53.882 
14:06:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:53.882 14:06:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:53.882 14:06:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:53.882 14:06:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:53.882 14:06:31 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:54.140 /dev/nbd1 00:06:54.399 14:06:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:54.399 14:06:31 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:54.399 1+0 records in 00:06:54.399 1+0 records out 00:06:54.399 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000357663 s, 11.5 MB/s 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:54.399 14:06:31 event.app_repeat 
-- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:54.399 14:06:31 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:54.399 14:06:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:54.399 14:06:31 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:54.399 14:06:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:54.399 14:06:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.399 14:06:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:54.658 { 00:06:54.658 "nbd_device": "/dev/nbd0", 00:06:54.658 "bdev_name": "Malloc0" 00:06:54.658 }, 00:06:54.658 { 00:06:54.658 "nbd_device": "/dev/nbd1", 00:06:54.658 "bdev_name": "Malloc1" 00:06:54.658 } 00:06:54.658 ]' 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:54.658 { 00:06:54.658 "nbd_device": "/dev/nbd0", 00:06:54.658 "bdev_name": "Malloc0" 00:06:54.658 }, 00:06:54.658 { 00:06:54.658 "nbd_device": "/dev/nbd1", 00:06:54.658 "bdev_name": "Malloc1" 00:06:54.658 } 00:06:54.658 ]' 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:54.658 /dev/nbd1' 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:54.658 /dev/nbd1' 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:54.658 
14:06:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:54.658 256+0 records in 00:06:54.658 256+0 records out 00:06:54.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00863108 s, 121 MB/s 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:54.658 256+0 records in 00:06:54.658 256+0 records out 00:06:54.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0303562 s, 34.5 MB/s 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:54.658 256+0 records in 00:06:54.658 256+0 records out 00:06:54.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0317526 s, 33.0 MB/s 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 
00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.658 14:06:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:54.917 14:06:32 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:54.917 14:06:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:54.917 14:06:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:54.917 14:06:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:54.917 14:06:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:54.917 14:06:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:54.917 14:06:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:54.917 14:06:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:54.917 14:06:32 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:54.917 14:06:32 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:55.176 14:06:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:55.474 14:06:32 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:55.474 14:06:32 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:55.474 14:06:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:55.474 14:06:32 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:55.474 14:06:32 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:55.474 14:06:32 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:55.474 14:06:32 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:55.474 14:06:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:55.474 14:06:32 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:55.474 14:06:32 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:55.754 14:06:32 
event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:55.754 14:06:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:55.754 14:06:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:55.754 14:06:32 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:55.754 14:06:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:55.754 14:06:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:55.754 14:06:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:55.754 14:06:32 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:55.754 14:06:32 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:55.754 14:06:32 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:55.754 14:06:32 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:55.754 14:06:32 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:55.754 14:06:32 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:56.321 14:06:33 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:57.259 [2024-11-27 14:06:34.401607] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:57.259 [2024-11-27 14:06:34.531053] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:57.259 [2024-11-27 14:06:34.531059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.519 [2024-11-27 14:06:34.723932] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:57.519 [2024-11-27 14:06:34.724054] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 
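The trace above derives the live device count by echoing the `nbd_get_disks` JSON and grepping the extracted names for `/dev/nbd`; when the list is empty, `grep -c` still prints `0` but exits nonzero, which is why the harness guards the count (the `-- # true` frame in the log). A minimal sketch of that counting pattern, with the JSON inlined as a here-string rather than fetched from a live SPDK target (the real harness first runs it through `jq -r '.[] | .nbd_device'`):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Count nbd entries in nbd_get_disks-style output. grep -c prints the
# match count even when it is 0, but then exits 1 -- under `set -e`
# that would abort the script, hence the `|| true` guard, mirroring
# the harness's behavior when all disks have been stopped.
nbd_count() {
    local json=$1
    grep -c /dev/nbd <<< "$json" || true
}

two_disks='[ { "nbd_device": "/dev/nbd0", "bdev_name": "Malloc0" },
             { "nbd_device": "/dev/nbd1", "bdev_name": "Malloc1" } ]'

echo "busy: $(nbd_count "$two_disks")"   # -> busy: 2
echo "idle: $(nbd_count '[]')"           # -> idle: 0
```

Note the count is per matching *line*; this works here because each device sits on its own line, just as in the jq-extracted name list the harness actually greps.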
00:06:59.425 14:06:36 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:59.425 spdk_app_start Round 2 00:06:59.425 14:06:36 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:59.425 14:06:36 event.app_repeat -- event/event.sh@25 -- # waitforlisten 58178 /var/tmp/spdk-nbd.sock 00:06:59.425 14:06:36 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58178 ']' 00:06:59.425 14:06:36 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:59.425 14:06:36 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:59.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:59.425 14:06:36 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:59.425 14:06:36 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:59.425 14:06:36 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:59.425 14:06:36 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.425 14:06:36 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:59.425 14:06:36 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:59.994 Malloc0 00:06:59.994 14:06:37 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:00.253 Malloc1 00:07:00.253 14:06:37 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.253 
14:06:37 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.253 14:06:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:00.512 /dev/nbd0 00:07:00.512 14:06:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:00.512 14:06:37 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:00.512 14:06:37 
event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:00.512 1+0 records in 00:07:00.512 1+0 records out 00:07:00.512 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000426327 s, 9.6 MB/s 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:00.512 14:06:37 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:00.512 14:06:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:00.512 14:06:37 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:00.513 14:06:37 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:01.121 /dev/nbd1 00:07:01.121 14:06:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:01.121 14:06:38 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:01.121 14:06:38 
event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:01.121 1+0 records in 00:07:01.121 1+0 records out 00:07:01.121 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416692 s, 9.8 MB/s 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:01.121 14:06:38 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:01.121 14:06:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:01.121 14:06:38 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:01.121 14:06:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:01.122 14:06:38 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.122 14:06:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:01.381 { 00:07:01.381 "nbd_device": "/dev/nbd0", 00:07:01.381 "bdev_name": "Malloc0" 00:07:01.381 }, 00:07:01.381 { 00:07:01.381 "nbd_device": "/dev/nbd1", 00:07:01.381 "bdev_name": 
"Malloc1" 00:07:01.381 } 00:07:01.381 ]' 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:01.381 { 00:07:01.381 "nbd_device": "/dev/nbd0", 00:07:01.381 "bdev_name": "Malloc0" 00:07:01.381 }, 00:07:01.381 { 00:07:01.381 "nbd_device": "/dev/nbd1", 00:07:01.381 "bdev_name": "Malloc1" 00:07:01.381 } 00:07:01.381 ]' 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:01.381 /dev/nbd1' 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:01.381 /dev/nbd1' 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:01.381 256+0 records in 00:07:01.381 256+0 records out 00:07:01.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0104734 s, 100 MB/s 
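The `nbd_dd_data_verify` sequence in the log fills a 1 MiB scratch file (`nbdrandtest`) from /dev/urandom, dd's it onto each nbd device with `oflag=direct`, and later reads it back with `cmp -b -n 1M`. A self-contained sketch of that write/verify round trip, using an ordinary temp file as a stand-in for /dev/nbd0 (`oflag=direct` is dropped here, since direct I/O is only appropriate for block devices or suitably aligned files):

```shell
#!/usr/bin/env bash
set -euo pipefail

tmpdir=$(mktemp -d)
trap 'rm -rf "$tmpdir"' EXIT

rand_file=$tmpdir/nbdrandtest   # pattern file, as in nbd_common.sh
device=$tmpdir/fake_nbd0        # illustrative stand-in for /dev/nbd0

# Write phase: 256 x 4 KiB = 1 MiB of random data, then copy it onto
# the "device" (the harness would add oflag=direct for a real nbd dev).
dd if=/dev/urandom of="$rand_file" bs=4096 count=256 status=none
dd if="$rand_file" of="$device" bs=4096 count=256 status=none

# Verify phase: byte-for-byte compare of the first 1 MiB, exactly as
# the harness runs `cmp -b -n 1M nbdrandtest /dev/nbdN`.
cmp -b -n 1M "$rand_file" "$device"
echo "verify OK"
```

`cmp` exits nonzero on the first differing byte, so under `set -e` any corruption aborts the script before the final echo.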
00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:01.381 256+0 records in 00:07:01.381 256+0 records out 00:07:01.381 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.029526 s, 35.5 MB/s 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:01.381 14:06:38 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:01.640 256+0 records in 00:07:01.640 256+0 records out 00:07:01.640 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0308215 s, 34.0 MB/s 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 
/dev/nbd1 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.640 14:06:38 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:01.899 14:06:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:01.899 14:06:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:01.899 14:06:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:01.899 14:06:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:01.899 14:06:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:01.899 14:06:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:01.899 14:06:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:01.899 14:06:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:01.899 14:06:39 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:01.899 14:06:39 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:02.157 14:06:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:02.157 14:06:39 event.app_repeat -- bdev/nbd_common.sh@55 -- # 
waitfornbd_exit nbd1 00:07:02.157 14:06:39 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:02.157 14:06:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:02.157 14:06:39 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:02.157 14:06:39 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:02.157 14:06:39 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:02.157 14:06:39 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:02.157 14:06:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:02.157 14:06:39 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:02.157 14:06:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:02.415 14:06:39 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:02.415 14:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:02.415 14:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:02.674 14:06:39 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:02.674 14:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:02.674 14:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:02.674 14:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:02.674 14:06:39 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:02.674 14:06:39 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:02.674 14:06:39 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:02.674 14:06:39 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:02.674 14:06:39 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:02.674 14:06:39 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:02.933 14:06:40 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:04.309 [2024-11-27 14:06:41.268470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:04.309 [2024-11-27 14:06:41.397076] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:04.309 [2024-11-27 14:06:41.397096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:04.568 [2024-11-27 14:06:41.588377] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:04.568 [2024-11-27 14:06:41.588476] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:05.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:05.945 14:06:43 event.app_repeat -- event/event.sh@38 -- # waitforlisten 58178 /var/tmp/spdk-nbd.sock 00:07:05.945 14:06:43 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 58178 ']' 00:07:05.945 14:06:43 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:05.945 14:06:43 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:05.945 14:06:43 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
00:07:05.945 14:06:43 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:05.945 14:06:43 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:06.203 14:06:43 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.203 14:06:43 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:06.203 14:06:43 event.app_repeat -- event/event.sh@39 -- # killprocess 58178 00:07:06.203 14:06:43 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 58178 ']' 00:07:06.203 14:06:43 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 58178 00:07:06.203 14:06:43 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:06.203 14:06:43 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:06.203 14:06:43 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58178 00:07:06.463 killing process with pid 58178 00:07:06.463 14:06:43 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:06.463 14:06:43 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:06.463 14:06:43 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58178' 00:07:06.463 14:06:43 event.app_repeat -- common/autotest_common.sh@973 -- # kill 58178 00:07:06.463 14:06:43 event.app_repeat -- common/autotest_common.sh@978 -- # wait 58178 00:07:07.399 spdk_app_start is called in Round 0. 00:07:07.399 Shutdown signal received, stop current app iteration 00:07:07.399 Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 reinitialization... 00:07:07.399 spdk_app_start is called in Round 1. 00:07:07.399 Shutdown signal received, stop current app iteration 00:07:07.399 Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 reinitialization... 00:07:07.399 spdk_app_start is called in Round 2. 
00:07:07.399 Shutdown signal received, stop current app iteration 00:07:07.399 Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 reinitialization... 00:07:07.399 spdk_app_start is called in Round 3. 00:07:07.399 Shutdown signal received, stop current app iteration 00:07:07.399 ************************************ 00:07:07.399 END TEST app_repeat 00:07:07.399 ************************************ 00:07:07.399 14:06:44 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:07.399 14:06:44 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:07.399 00:07:07.399 real 0m22.253s 00:07:07.399 user 0m49.569s 00:07:07.399 sys 0m3.235s 00:07:07.399 14:06:44 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.399 14:06:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:07.399 14:06:44 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:07.399 14:06:44 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:07.399 14:06:44 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.399 14:06:44 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.399 14:06:44 event -- common/autotest_common.sh@10 -- # set +x 00:07:07.399 ************************************ 00:07:07.399 START TEST cpu_locks 00:07:07.399 ************************************ 00:07:07.399 14:06:44 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:07:07.399 * Looking for test storage... 
00:07:07.399 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:07:07.399 14:06:44 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.399 14:06:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.399 14:06:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.659 14:06:44 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.659 14:06:44 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:07.659 14:06:44 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.659 14:06:44 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.659 --rc genhtml_branch_coverage=1 00:07:07.659 --rc genhtml_function_coverage=1 00:07:07.659 --rc genhtml_legend=1 00:07:07.659 --rc geninfo_all_blocks=1 00:07:07.659 --rc geninfo_unexecuted_blocks=1 00:07:07.659 00:07:07.659 ' 00:07:07.659 14:06:44 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.659 --rc genhtml_branch_coverage=1 00:07:07.659 --rc genhtml_function_coverage=1 00:07:07.659 --rc genhtml_legend=1 00:07:07.659 --rc geninfo_all_blocks=1 00:07:07.659 --rc geninfo_unexecuted_blocks=1 
00:07:07.659 00:07:07.659 ' 00:07:07.659 14:06:44 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.659 --rc genhtml_branch_coverage=1 00:07:07.659 --rc genhtml_function_coverage=1 00:07:07.659 --rc genhtml_legend=1 00:07:07.659 --rc geninfo_all_blocks=1 00:07:07.659 --rc geninfo_unexecuted_blocks=1 00:07:07.659 00:07:07.659 ' 00:07:07.659 14:06:44 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.659 --rc genhtml_branch_coverage=1 00:07:07.659 --rc genhtml_function_coverage=1 00:07:07.659 --rc genhtml_legend=1 00:07:07.659 --rc geninfo_all_blocks=1 00:07:07.659 --rc geninfo_unexecuted_blocks=1 00:07:07.659 00:07:07.659 ' 00:07:07.659 14:06:44 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:07.659 14:06:44 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:07.659 14:06:44 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:07.659 14:06:44 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:07.659 14:06:44 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:07.659 14:06:44 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.659 14:06:44 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.659 ************************************ 00:07:07.659 START TEST default_locks 00:07:07.659 ************************************ 00:07:07.659 14:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:07.659 14:06:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=58658 00:07:07.659 14:06:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:07.659 
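The `cmp_versions` trace above (from scripts/common.sh) splits dotted versions on the characters `.-:` into arrays and compares them field by field, numerically, which is how the harness decides whether the installed lcov predates 2.x (`lt 1.15 2`). A condensed rendering of that logic; `version_lt` is an illustrative name, and numeric-only fields are assumed, as in the trace:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Succeed iff dotted version $1 is strictly less than $2, comparing
# numeric fields left to right; missing trailing fields count as 0,
# so "2" and "2.0" compare equal.
version_lt() {
    local IFS=.-:          # same separator set the harness uses
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
    for (( i = 0; i < n; i++ )); do
        (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
        (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
    done
    return 1               # equal is not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
version_lt 1.9 1.15 && echo "1.9 < 1.15 (numeric, not lexicographic)"
```

The field-wise numeric compare is the point: a plain string comparison would wrongly rank 1.9 above 1.15.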
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:07.659 14:06:44 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 58658 00:07:07.659 14:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58658 ']' 00:07:07.659 14:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:07.659 14:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:07.659 14:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:07.659 14:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:07.659 14:06:44 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:07.659 [2024-11-27 14:06:44.852634] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:07:07.660 [2024-11-27 14:06:44.852814] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58658 ] 00:07:07.919 [2024-11-27 14:06:45.024398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.919 [2024-11-27 14:06:45.154992] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.855 14:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:08.855 14:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:08.855 14:06:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 58658 00:07:08.855 14:06:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 58658 00:07:08.855 14:06:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:09.421 14:06:46 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 58658 00:07:09.421 14:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 58658 ']' 00:07:09.421 14:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 58658 00:07:09.421 14:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:09.421 14:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:09.421 14:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58658 00:07:09.421 killing process with pid 58658 00:07:09.421 14:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:09.421 14:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:09.421 14:06:46 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 58658' 00:07:09.421 14:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 58658 00:07:09.421 14:06:46 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 58658 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 58658 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 58658 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:11.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.976 ERROR: process (pid: 58658) is no longer running 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 58658 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 58658 ']' 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.976 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (58658) - No such process 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:11.976 00:07:11.976 real 0m4.053s 00:07:11.976 user 0m4.219s 00:07:11.976 sys 0m0.742s 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.976 14:06:48 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.976 ************************************ 00:07:11.976 END TEST default_locks 00:07:11.976 ************************************ 00:07:11.976 14:06:48 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:11.976 14:06:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.976 14:06:48 event.cpu_locks -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.976 14:06:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:11.976 ************************************ 00:07:11.976 START TEST default_locks_via_rpc 00:07:11.976 ************************************ 00:07:11.976 14:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:11.976 14:06:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=58733 00:07:11.976 14:06:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 58733 00:07:11.976 14:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 58733 ']' 00:07:11.976 14:06:48 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:11.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.976 14:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.976 14:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.976 14:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.976 14:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.976 14:06:48 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:11.976 [2024-11-27 14:06:48.949260] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:07:11.976 [2024-11-27 14:06:48.949419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58733 ] 00:07:11.976 [2024-11-27 14:06:49.128392] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.234 [2024-11-27 14:06:49.305402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:13.172 14:06:50 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 58733 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 58733 00:07:13.172 14:06:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:13.430 14:06:50 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 58733 00:07:13.430 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 58733 ']' 00:07:13.430 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 58733 00:07:13.430 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:13.430 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:13.430 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58733 00:07:13.689 killing process with pid 58733 00:07:13.689 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:13.689 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:13.689 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58733' 00:07:13.689 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 58733 00:07:13.689 14:06:50 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 58733 00:07:16.342 ************************************ 00:07:16.342 END TEST default_locks_via_rpc 00:07:16.342 ************************************ 00:07:16.342 00:07:16.342 real 0m4.153s 00:07:16.342 user 0m4.205s 00:07:16.342 sys 0m0.746s 00:07:16.342 
14:06:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:16.342 14:06:52 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:16.342 14:06:53 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:16.342 14:06:53 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:16.342 14:06:53 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:16.342 14:06:53 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:16.342 ************************************ 00:07:16.342 START TEST non_locking_app_on_locked_coremask 00:07:16.342 ************************************ 00:07:16.342 14:06:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:16.342 14:06:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=58807 00:07:16.342 14:06:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:16.342 14:06:53 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 58807 /var/tmp/spdk.sock 00:07:16.342 14:06:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58807 ']' 00:07:16.342 14:06:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:16.342 14:06:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:16.342 14:06:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:16.342 14:06:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.342 14:06:53 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:16.342 [2024-11-27 14:06:53.178328] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:07:16.342 [2024-11-27 14:06:53.178730] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58807 ] 00:07:16.342 [2024-11-27 14:06:53.353900] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:16.342 [2024-11-27 14:06:53.486368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:17.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:17.279 14:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:17.279 14:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:17.279 14:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=58823 00:07:17.279 14:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:17.279 14:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 58823 /var/tmp/spdk2.sock 00:07:17.279 14:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58823 ']' 00:07:17.279 14:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:17.279 14:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:17.279 14:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:17.279 14:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:17.280 14:06:54 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:17.280 [2024-11-27 14:06:54.466743] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:07:17.280 [2024-11-27 14:06:54.467205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58823 ] 00:07:17.538 [2024-11-27 14:06:54.663177] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:17.538 [2024-11-27 14:06:54.663252] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:17.797 [2024-11-27 14:06:54.928681] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:20.330 14:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:20.330 14:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:20.330 14:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 58807 00:07:20.330 14:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:20.330 14:06:57 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58807 00:07:20.899 14:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 58807 00:07:20.899 14:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58807 ']' 00:07:20.899 14:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58807 00:07:20.899 14:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:20.899 14:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:20.899 14:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 
58807 00:07:20.899 killing process with pid 58807 00:07:20.899 14:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:20.899 14:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:20.899 14:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58807' 00:07:20.899 14:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58807 00:07:20.899 14:06:58 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58807 00:07:26.176 14:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 58823 00:07:26.176 14:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58823 ']' 00:07:26.176 14:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 58823 00:07:26.176 14:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:26.176 14:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.176 14:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58823 00:07:26.176 killing process with pid 58823 00:07:26.176 14:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.176 14:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.176 14:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58823' 00:07:26.176 14:07:02 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 58823 00:07:26.176 14:07:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 58823 00:07:28.078 00:07:28.078 real 0m11.809s 00:07:28.078 user 0m12.474s 00:07:28.078 sys 0m1.442s 00:07:28.078 14:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.078 ************************************ 00:07:28.078 END TEST non_locking_app_on_locked_coremask 00:07:28.078 ************************************ 00:07:28.078 14:07:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.078 14:07:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:28.078 14:07:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.078 14:07:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.078 14:07:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.078 ************************************ 00:07:28.078 START TEST locking_app_on_unlocked_coremask 00:07:28.078 ************************************ 00:07:28.078 14:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:28.078 14:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=58980 00:07:28.078 14:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:28.078 14:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 58980 /var/tmp/spdk.sock 00:07:28.078 14:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58980 ']' 
00:07:28.078 14:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.078 14:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.078 14:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.078 14:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.078 14:07:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:28.078 [2024-11-27 14:07:05.044283] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:07:28.078 [2024-11-27 14:07:05.044460] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58980 ] 00:07:28.078 [2024-11-27 14:07:05.226475] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:28.078 [2024-11-27 14:07:05.226572] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.456 [2024-11-27 14:07:05.383581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.029 14:07:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.029 14:07:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:29.029 14:07:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=58996 00:07:29.029 14:07:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:29.029 14:07:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 58996 /var/tmp/spdk2.sock 00:07:29.029 14:07:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 58996 ']' 00:07:29.029 14:07:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:29.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.029 14:07:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.029 14:07:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.029 14:07:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.029 14:07:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.289 [2024-11-27 14:07:06.368510] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:07:29.289 [2024-11-27 14:07:06.368650] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58996 ] 00:07:29.289 [2024-11-27 14:07:06.563546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.548 [2024-11-27 14:07:06.815965] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.082 14:07:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.082 14:07:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:32.082 14:07:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 58996 00:07:32.082 14:07:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 58996 00:07:32.082 14:07:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.019 14:07:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 58980 00:07:33.019 14:07:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58980 ']' 00:07:33.019 14:07:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58980 00:07:33.019 14:07:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:33.019 14:07:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.019 14:07:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58980 00:07:33.019 14:07:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # 
process_name=reactor_0 00:07:33.019 killing process with pid 58980 00:07:33.019 14:07:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.019 14:07:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58980' 00:07:33.019 14:07:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58980 00:07:33.019 14:07:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 58980 00:07:37.233 14:07:14 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 58996 00:07:37.233 14:07:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 58996 ']' 00:07:37.233 14:07:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 58996 00:07:37.233 14:07:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:37.233 14:07:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:37.233 14:07:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58996 00:07:37.233 14:07:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:37.233 killing process with pid 58996 00:07:37.233 14:07:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:37.233 14:07:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58996' 00:07:37.233 14:07:14 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 58996 00:07:37.233 14:07:14 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@978 -- # wait 58996 00:07:39.784 00:07:39.784 real 0m11.790s 00:07:39.784 user 0m12.351s 00:07:39.784 sys 0m1.519s 00:07:39.784 ************************************ 00:07:39.784 END TEST locking_app_on_unlocked_coremask 00:07:39.784 ************************************ 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.784 14:07:16 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:39.784 14:07:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:39.784 14:07:16 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:39.784 14:07:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:39.784 ************************************ 00:07:39.784 START TEST locking_app_on_locked_coremask 00:07:39.784 ************************************ 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=59146 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 59146 /var/tmp/spdk.sock 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59146 ']' 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.784 14:07:16 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:39.784 [2024-11-27 14:07:16.865982] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:07:39.784 [2024-11-27 14:07:16.866167] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59146 ] 00:07:39.784 [2024-11-27 14:07:17.049681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.041 [2024-11-27 14:07:17.187636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=59173 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 59173 /var/tmp/spdk2.sock 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 
-m 0x1 -r /var/tmp/spdk2.sock 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59173 /var/tmp/spdk2.sock 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59173 /var/tmp/spdk2.sock 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 59173 ']' 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:40.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:40.979 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:40.979 [2024-11-27 14:07:18.184618] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:07:40.979 [2024-11-27 14:07:18.184802] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59173 ] 00:07:41.238 [2024-11-27 14:07:18.376816] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 59146 has claimed it. 00:07:41.238 [2024-11-27 14:07:18.376939] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:41.804 ERROR: process (pid: 59173) is no longer running 00:07:41.804 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59173) - No such process 00:07:41.804 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:41.804 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:41.804 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:41.804 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:41.804 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:41.804 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:41.804 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 59146 00:07:41.804 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 59146 00:07:41.804 14:07:18 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:42.063 14:07:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 59146 00:07:42.063 14:07:19 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 59146 ']' 00:07:42.063 14:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 59146 00:07:42.063 14:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:42.063 14:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:42.063 14:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59146 00:07:42.063 14:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:42.063 killing process with pid 59146 00:07:42.063 14:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:42.063 14:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59146' 00:07:42.063 14:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 59146 00:07:42.063 14:07:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 59146 00:07:44.601 00:07:44.601 real 0m4.798s 00:07:44.601 user 0m5.147s 00:07:44.601 sys 0m0.869s 00:07:44.601 14:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:44.601 14:07:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.601 ************************************ 00:07:44.601 END TEST locking_app_on_locked_coremask 00:07:44.602 ************************************ 00:07:44.602 14:07:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:44.602 14:07:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:07:44.602 14:07:21 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:44.602 14:07:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:44.602 ************************************ 00:07:44.602 START TEST locking_overlapped_coremask 00:07:44.602 ************************************ 00:07:44.602 14:07:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:44.602 14:07:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=59237 00:07:44.602 14:07:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 59237 /var/tmp/spdk.sock 00:07:44.602 14:07:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59237 ']' 00:07:44.602 14:07:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:07:44.602 14:07:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:44.602 14:07:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:44.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:44.602 14:07:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:44.602 14:07:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:44.602 14:07:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:44.602 [2024-11-27 14:07:21.714738] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:07:44.602 [2024-11-27 14:07:21.715447] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59237 ] 00:07:44.861 [2024-11-27 14:07:21.906588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:44.861 [2024-11-27 14:07:22.077034] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:44.861 [2024-11-27 14:07:22.077132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:44.861 [2024-11-27 14:07:22.077135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=59255 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 59255 /var/tmp/spdk2.sock 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 59255 /var/tmp/spdk2.sock 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask 
-- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 59255 /var/tmp/spdk2.sock 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 59255 ']' 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:45.799 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:45.799 14:07:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:46.059 [2024-11-27 14:07:23.127590] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:07:46.059 [2024-11-27 14:07:23.127819] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59255 ] 00:07:46.317 [2024-11-27 14:07:23.338482] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59237 has claimed it. 00:07:46.317 [2024-11-27 14:07:23.338586] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 
00:07:46.576 ERROR: process (pid: 59255) is no longer running 00:07:46.576 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (59255) - No such process 00:07:46.576 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:46.576 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:46.576 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:46.576 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:46.576 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 59237 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 59237 ']' 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 59237 00:07:46.577 14:07:23 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59237 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:46.577 killing process with pid 59237 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59237' 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 59237 00:07:46.577 14:07:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 59237 00:07:49.140 00:07:49.140 real 0m4.473s 00:07:49.140 user 0m12.103s 00:07:49.140 sys 0m0.730s 00:07:49.140 14:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:49.140 14:07:26 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:49.140 ************************************ 00:07:49.140 END TEST locking_overlapped_coremask 00:07:49.140 ************************************ 00:07:49.140 14:07:26 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:49.140 14:07:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:49.140 14:07:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:49.141 14:07:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:49.141 ************************************ 00:07:49.141 START TEST 
locking_overlapped_coremask_via_rpc 00:07:49.141 ************************************ 00:07:49.141 14:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:49.141 14:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=59319 00:07:49.141 14:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 59319 /var/tmp/spdk.sock 00:07:49.141 14:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59319 ']' 00:07:49.141 14:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:49.141 14:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:49.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:49.141 14:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:49.141 14:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:49.141 14:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:49.141 14:07:26 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:49.141 [2024-11-27 14:07:26.211693] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:07:49.141 [2024-11-27 14:07:26.211898] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59319 ] 00:07:49.141 [2024-11-27 14:07:26.383988] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:49.141 [2024-11-27 14:07:26.384058] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:49.399 [2024-11-27 14:07:26.509108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:49.399 [2024-11-27 14:07:26.509274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:49.399 [2024-11-27 14:07:26.509282] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.335 14:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.335 14:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:50.335 14:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=59348 00:07:50.335 14:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 59348 /var/tmp/spdk2.sock 00:07:50.335 14:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59348 ']' 00:07:50.335 14:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:07:50.335 14:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:50.335 14:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.335 14:07:27 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:50.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:50.335 14:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.335 14:07:27 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:50.335 [2024-11-27 14:07:27.530606] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:07:50.335 [2024-11-27 14:07:27.531239] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59348 ] 00:07:50.594 [2024-11-27 14:07:27.725988] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:50.594 [2024-11-27 14:07:27.726055] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:50.853 [2024-11-27 14:07:27.990459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.853 [2024-11-27 14:07:27.990565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.853 [2024-11-27 14:07:27.990588] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.387 14:07:30 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.387 [2024-11-27 14:07:30.351963] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 59319 has claimed it. 00:07:53.387 request: 00:07:53.387 { 00:07:53.387 "method": "framework_enable_cpumask_locks", 00:07:53.387 "req_id": 1 00:07:53.387 } 00:07:53.387 Got JSON-RPC error response 00:07:53.387 response: 00:07:53.387 { 00:07:53.387 "code": -32603, 00:07:53.387 "message": "Failed to claim CPU core: 2" 00:07:53.387 } 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 59319 /var/tmp/spdk.sock 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # 
'[' -z 59319 ']' 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 59348 /var/tmp/spdk2.sock 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 59348 ']' 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:53.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.387 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.955 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.955 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:53.955 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:53.955 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:53.955 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:53.955 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:53.955 00:07:53.955 real 0m4.879s 00:07:53.955 user 0m1.805s 00:07:53.955 sys 0m0.236s 00:07:53.955 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:53.955 14:07:30 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:53.955 ************************************ 00:07:53.955 END TEST locking_overlapped_coremask_via_rpc 00:07:53.955 ************************************ 00:07:53.955 14:07:31 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:53.955 14:07:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59319 ]] 00:07:53.955 14:07:31 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 59319 00:07:53.955 14:07:31 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59319 ']' 00:07:53.955 14:07:31 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59319 00:07:53.955 14:07:31 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:53.955 14:07:31 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.955 14:07:31 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59319 00:07:53.955 14:07:31 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:53.955 14:07:31 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:53.955 killing process with pid 59319 00:07:53.955 14:07:31 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59319' 00:07:53.955 14:07:31 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59319 00:07:53.955 14:07:31 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59319 00:07:56.487 14:07:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59348 ]] 00:07:56.487 14:07:33 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59348 00:07:56.487 14:07:33 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59348 ']' 00:07:56.487 14:07:33 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59348 00:07:56.487 14:07:33 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:56.487 14:07:33 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:56.487 14:07:33 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59348 00:07:56.487 14:07:33 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:56.487 14:07:33 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:56.487 killing process with pid 59348 00:07:56.487 14:07:33 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 59348' 00:07:56.487 14:07:33 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 59348 00:07:56.487 14:07:33 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 59348 00:07:58.401 14:07:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:58.401 14:07:35 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:58.401 14:07:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 59319 ]] 00:07:58.401 14:07:35 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 59319 00:07:58.401 14:07:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59319 ']' 00:07:58.401 14:07:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59319 00:07:58.401 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59319) - No such process 00:07:58.401 Process with pid 59319 is not found 00:07:58.401 14:07:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59319 is not found' 00:07:58.401 14:07:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 59348 ]] 00:07:58.401 14:07:35 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 59348 00:07:58.401 14:07:35 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 59348 ']' 00:07:58.401 14:07:35 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 59348 00:07:58.401 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (59348) - No such process 00:07:58.401 Process with pid 59348 is not found 00:07:58.401 14:07:35 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 59348 is not found' 00:07:58.401 14:07:35 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:58.401 00:07:58.401 real 0m51.057s 00:07:58.401 user 1m28.965s 00:07:58.401 sys 0m7.515s 00:07:58.401 14:07:35 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.401 14:07:35 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:58.401 
************************************ 00:07:58.401 END TEST cpu_locks 00:07:58.401 ************************************ 00:07:58.401 00:07:58.401 real 1m24.410s 00:07:58.401 user 2m35.823s 00:07:58.401 sys 0m11.842s 00:07:58.401 14:07:35 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:58.401 14:07:35 event -- common/autotest_common.sh@10 -- # set +x 00:07:58.401 ************************************ 00:07:58.401 END TEST event 00:07:58.401 ************************************ 00:07:58.659 14:07:35 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:58.659 14:07:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:58.659 14:07:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.659 14:07:35 -- common/autotest_common.sh@10 -- # set +x 00:07:58.659 ************************************ 00:07:58.659 START TEST thread 00:07:58.659 ************************************ 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:07:58.659 * Looking for test storage... 
00:07:58.659 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:58.659 14:07:35 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:58.659 14:07:35 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:58.659 14:07:35 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:58.659 14:07:35 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:58.659 14:07:35 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:58.659 14:07:35 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:58.659 14:07:35 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:58.659 14:07:35 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:58.659 14:07:35 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:58.659 14:07:35 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:58.659 14:07:35 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:58.659 14:07:35 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:58.659 14:07:35 thread -- scripts/common.sh@345 -- # : 1 00:07:58.659 14:07:35 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:58.659 14:07:35 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:58.659 14:07:35 thread -- scripts/common.sh@365 -- # decimal 1 00:07:58.659 14:07:35 thread -- scripts/common.sh@353 -- # local d=1 00:07:58.659 14:07:35 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:58.659 14:07:35 thread -- scripts/common.sh@355 -- # echo 1 00:07:58.659 14:07:35 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:58.659 14:07:35 thread -- scripts/common.sh@366 -- # decimal 2 00:07:58.659 14:07:35 thread -- scripts/common.sh@353 -- # local d=2 00:07:58.659 14:07:35 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:58.659 14:07:35 thread -- scripts/common.sh@355 -- # echo 2 00:07:58.659 14:07:35 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:58.659 14:07:35 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:58.659 14:07:35 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:58.659 14:07:35 thread -- scripts/common.sh@368 -- # return 0 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:58.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.659 --rc genhtml_branch_coverage=1 00:07:58.659 --rc genhtml_function_coverage=1 00:07:58.659 --rc genhtml_legend=1 00:07:58.659 --rc geninfo_all_blocks=1 00:07:58.659 --rc geninfo_unexecuted_blocks=1 00:07:58.659 00:07:58.659 ' 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:58.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.659 --rc genhtml_branch_coverage=1 00:07:58.659 --rc genhtml_function_coverage=1 00:07:58.659 --rc genhtml_legend=1 00:07:58.659 --rc geninfo_all_blocks=1 00:07:58.659 --rc geninfo_unexecuted_blocks=1 00:07:58.659 00:07:58.659 ' 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:58.659 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.659 --rc genhtml_branch_coverage=1 00:07:58.659 --rc genhtml_function_coverage=1 00:07:58.659 --rc genhtml_legend=1 00:07:58.659 --rc geninfo_all_blocks=1 00:07:58.659 --rc geninfo_unexecuted_blocks=1 00:07:58.659 00:07:58.659 ' 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:58.659 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:58.659 --rc genhtml_branch_coverage=1 00:07:58.659 --rc genhtml_function_coverage=1 00:07:58.659 --rc genhtml_legend=1 00:07:58.659 --rc geninfo_all_blocks=1 00:07:58.659 --rc geninfo_unexecuted_blocks=1 00:07:58.659 00:07:58.659 ' 00:07:58.659 14:07:35 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:58.659 14:07:35 thread -- common/autotest_common.sh@10 -- # set +x 00:07:58.659 ************************************ 00:07:58.659 START TEST thread_poller_perf 00:07:58.659 ************************************ 00:07:58.659 14:07:35 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:58.659 [2024-11-27 14:07:35.928325] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:07:58.659 [2024-11-27 14:07:35.928494] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59543 ] 00:07:58.918 [2024-11-27 14:07:36.122827] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.177 [2024-11-27 14:07:36.279601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.177 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:08:00.553 [2024-11-27T14:07:37.831Z] ====================================== 00:08:00.553 [2024-11-27T14:07:37.831Z] busy:2213728372 (cyc) 00:08:00.553 [2024-11-27T14:07:37.831Z] total_run_count: 301000 00:08:00.553 [2024-11-27T14:07:37.831Z] tsc_hz: 2200000000 (cyc) 00:08:00.553 [2024-11-27T14:07:37.831Z] ====================================== 00:08:00.553 [2024-11-27T14:07:37.831Z] poller_cost: 7354 (cyc), 3342 (nsec) 00:08:00.553 00:08:00.553 real 0m1.645s 00:08:00.553 user 0m1.432s 00:08:00.553 sys 0m0.103s 00:08:00.553 14:07:37 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:00.553 14:07:37 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:00.553 ************************************ 00:08:00.553 END TEST thread_poller_perf 00:08:00.553 ************************************ 00:08:00.553 14:07:37 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:00.553 14:07:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:08:00.553 14:07:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:00.553 14:07:37 thread -- common/autotest_common.sh@10 -- # set +x 00:08:00.553 ************************************ 00:08:00.553 START TEST thread_poller_perf 00:08:00.553 
************************************ 00:08:00.553 14:07:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:08:00.553 [2024-11-27 14:07:37.622573] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:08:00.553 [2024-11-27 14:07:37.622788] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59580 ] 00:08:00.553 [2024-11-27 14:07:37.806774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.811 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:08:00.811 [2024-11-27 14:07:37.936003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.187 [2024-11-27T14:07:39.465Z] ====================================== 00:08:02.187 [2024-11-27T14:07:39.465Z] busy:2204058652 (cyc) 00:08:02.187 [2024-11-27T14:07:39.465Z] total_run_count: 3934000 00:08:02.187 [2024-11-27T14:07:39.465Z] tsc_hz: 2200000000 (cyc) 00:08:02.187 [2024-11-27T14:07:39.465Z] ====================================== 00:08:02.187 [2024-11-27T14:07:39.465Z] poller_cost: 560 (cyc), 254 (nsec) 00:08:02.187 00:08:02.187 real 0m1.584s 00:08:02.187 user 0m1.366s 00:08:02.187 sys 0m0.110s 00:08:02.187 14:07:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.187 ************************************ 00:08:02.187 END TEST thread_poller_perf 00:08:02.187 ************************************ 00:08:02.187 14:07:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:08:02.187 14:07:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:08:02.187 ************************************ 00:08:02.187 END TEST thread 00:08:02.187 ************************************ 00:08:02.187 
00:08:02.187 real 0m3.510s 00:08:02.187 user 0m2.955s 00:08:02.187 sys 0m0.340s 00:08:02.187 14:07:39 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.187 14:07:39 thread -- common/autotest_common.sh@10 -- # set +x 00:08:02.187 14:07:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:08:02.187 14:07:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:02.187 14:07:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:02.187 14:07:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.187 14:07:39 -- common/autotest_common.sh@10 -- # set +x 00:08:02.187 ************************************ 00:08:02.187 START TEST app_cmdline 00:08:02.188 ************************************ 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:08:02.188 * Looking for test storage... 00:08:02.188 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 
00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@345 -- # : 1 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.188 14:07:39 app_cmdline -- scripts/common.sh@368 -- # return 0 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:02.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.188 --rc genhtml_branch_coverage=1 00:08:02.188 --rc genhtml_function_coverage=1 00:08:02.188 --rc 
genhtml_legend=1 00:08:02.188 --rc geninfo_all_blocks=1 00:08:02.188 --rc geninfo_unexecuted_blocks=1 00:08:02.188 00:08:02.188 ' 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:02.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.188 --rc genhtml_branch_coverage=1 00:08:02.188 --rc genhtml_function_coverage=1 00:08:02.188 --rc genhtml_legend=1 00:08:02.188 --rc geninfo_all_blocks=1 00:08:02.188 --rc geninfo_unexecuted_blocks=1 00:08:02.188 00:08:02.188 ' 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:02.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.188 --rc genhtml_branch_coverage=1 00:08:02.188 --rc genhtml_function_coverage=1 00:08:02.188 --rc genhtml_legend=1 00:08:02.188 --rc geninfo_all_blocks=1 00:08:02.188 --rc geninfo_unexecuted_blocks=1 00:08:02.188 00:08:02.188 ' 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:02.188 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.188 --rc genhtml_branch_coverage=1 00:08:02.188 --rc genhtml_function_coverage=1 00:08:02.188 --rc genhtml_legend=1 00:08:02.188 --rc geninfo_all_blocks=1 00:08:02.188 --rc geninfo_unexecuted_blocks=1 00:08:02.188 00:08:02.188 ' 00:08:02.188 14:07:39 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:08:02.188 14:07:39 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=59663 00:08:02.188 14:07:39 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 59663 00:08:02.188 14:07:39 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 59663 ']' 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:08:02.188 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:02.188 14:07:39 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:02.446 [2024-11-27 14:07:39.549825] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:08:02.446 [2024-11-27 14:07:39.550023] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59663 ] 00:08:02.704 [2024-11-27 14:07:39.741381] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:02.704 [2024-11-27 14:07:39.899213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.640 14:07:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.640 14:07:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:08:03.640 14:07:40 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:08:03.899 { 00:08:03.900 "version": "SPDK v25.01-pre git sha1 38b931b23", 00:08:03.900 "fields": { 00:08:03.900 "major": 25, 00:08:03.900 "minor": 1, 00:08:03.900 "patch": 0, 00:08:03.900 "suffix": "-pre", 00:08:03.900 "commit": "38b931b23" 00:08:03.900 } 00:08:03.900 } 00:08:03.900 14:07:41 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:08:03.900 14:07:41 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:08:03.900 14:07:41 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:08:03.900 14:07:41 app_cmdline -- app/cmdline.sh@26 -- # 
methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:08:03.900 14:07:41 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:03.900 14:07:41 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:03.900 14:07:41 app_cmdline -- app/cmdline.sh@26 -- # sort 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:03.900 14:07:41 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:08:03.900 14:07:41 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:08:03.900 14:07:41 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@646 
-- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:03.900 14:07:41 app_cmdline -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:08:04.159 request: 00:08:04.159 { 00:08:04.159 "method": "env_dpdk_get_mem_stats", 00:08:04.159 "req_id": 1 00:08:04.159 } 00:08:04.159 Got JSON-RPC error response 00:08:04.159 response: 00:08:04.159 { 00:08:04.159 "code": -32601, 00:08:04.159 "message": "Method not found" 00:08:04.159 } 00:08:04.159 14:07:41 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:08:04.159 14:07:41 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:04.159 14:07:41 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:04.159 14:07:41 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:04.159 14:07:41 app_cmdline -- app/cmdline.sh@1 -- # killprocess 59663 00:08:04.159 14:07:41 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 59663 ']' 00:08:04.159 14:07:41 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 59663 00:08:04.159 14:07:41 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:08:04.159 14:07:41 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:04.159 14:07:41 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59663 00:08:04.419 14:07:41 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:04.419 14:07:41 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:04.419 killing process with pid 59663 00:08:04.419 14:07:41 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59663' 00:08:04.419 14:07:41 app_cmdline -- common/autotest_common.sh@973 -- # kill 59663 00:08:04.419 14:07:41 app_cmdline -- common/autotest_common.sh@978 -- # wait 59663 00:08:06.952 00:08:06.952 real 0m4.382s 00:08:06.952 user 0m4.805s 00:08:06.952 sys 0m0.714s 00:08:06.952 14:07:43 app_cmdline -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.952 ************************************ 00:08:06.952 END TEST app_cmdline 00:08:06.952 ************************************ 00:08:06.952 14:07:43 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:08:06.952 14:07:43 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:06.952 14:07:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.952 14:07:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.952 14:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:06.952 ************************************ 00:08:06.952 START TEST version 00:08:06.952 ************************************ 00:08:06.952 14:07:43 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:08:06.952 * Looking for test storage... 00:08:06.952 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:08:06.952 14:07:43 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:06.952 14:07:43 version -- common/autotest_common.sh@1693 -- # lcov --version 00:08:06.952 14:07:43 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.952 14:07:43 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.952 14:07:43 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.952 14:07:43 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.952 14:07:43 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.952 14:07:43 version -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.952 14:07:43 version -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.952 14:07:43 version -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.952 14:07:43 version -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.952 14:07:43 version -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.953 14:07:43 version -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.953 14:07:43 version -- 
scripts/common.sh@341 -- # ver2_l=1 00:08:06.953 14:07:43 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:06.953 14:07:43 version -- scripts/common.sh@344 -- # case "$op" in 00:08:06.953 14:07:43 version -- scripts/common.sh@345 -- # : 1 00:08:06.953 14:07:43 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.953 14:07:43 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.953 14:07:43 version -- scripts/common.sh@365 -- # decimal 1 00:08:06.953 14:07:43 version -- scripts/common.sh@353 -- # local d=1 00:08:06.953 14:07:43 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.953 14:07:43 version -- scripts/common.sh@355 -- # echo 1 00:08:06.953 14:07:43 version -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.953 14:07:43 version -- scripts/common.sh@366 -- # decimal 2 00:08:06.953 14:07:43 version -- scripts/common.sh@353 -- # local d=2 00:08:06.953 14:07:43 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.953 14:07:43 version -- scripts/common.sh@355 -- # echo 2 00:08:06.953 14:07:43 version -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.953 14:07:43 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.953 14:07:43 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.953 14:07:43 version -- scripts/common.sh@368 -- # return 0 00:08:06.953 14:07:43 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.953 14:07:43 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.953 --rc genhtml_branch_coverage=1 00:08:06.953 --rc genhtml_function_coverage=1 00:08:06.953 --rc genhtml_legend=1 00:08:06.953 --rc geninfo_all_blocks=1 00:08:06.953 --rc geninfo_unexecuted_blocks=1 00:08:06.953 00:08:06.953 ' 00:08:06.953 14:07:43 version -- common/autotest_common.sh@1706 -- # 
LCOV_OPTS=' 00:08:06.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.953 --rc genhtml_branch_coverage=1 00:08:06.953 --rc genhtml_function_coverage=1 00:08:06.953 --rc genhtml_legend=1 00:08:06.953 --rc geninfo_all_blocks=1 00:08:06.953 --rc geninfo_unexecuted_blocks=1 00:08:06.953 00:08:06.953 ' 00:08:06.953 14:07:43 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.953 --rc genhtml_branch_coverage=1 00:08:06.953 --rc genhtml_function_coverage=1 00:08:06.953 --rc genhtml_legend=1 00:08:06.953 --rc geninfo_all_blocks=1 00:08:06.953 --rc geninfo_unexecuted_blocks=1 00:08:06.953 00:08:06.953 ' 00:08:06.953 14:07:43 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.953 --rc genhtml_branch_coverage=1 00:08:06.953 --rc genhtml_function_coverage=1 00:08:06.953 --rc genhtml_legend=1 00:08:06.953 --rc geninfo_all_blocks=1 00:08:06.953 --rc geninfo_unexecuted_blocks=1 00:08:06.953 00:08:06.953 ' 00:08:06.953 14:07:43 version -- app/version.sh@17 -- # get_header_version major 00:08:06.953 14:07:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:06.953 14:07:43 version -- app/version.sh@14 -- # cut -f2 00:08:06.953 14:07:43 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.953 14:07:43 version -- app/version.sh@17 -- # major=25 00:08:06.953 14:07:43 version -- app/version.sh@18 -- # get_header_version minor 00:08:06.953 14:07:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:06.953 14:07:43 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.953 14:07:43 version -- app/version.sh@14 -- # cut -f2 00:08:06.953 14:07:43 version -- app/version.sh@18 -- # minor=1 00:08:06.953 14:07:43 
version -- app/version.sh@19 -- # get_header_version patch 00:08:06.953 14:07:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:06.953 14:07:43 version -- app/version.sh@14 -- # cut -f2 00:08:06.953 14:07:43 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.953 14:07:43 version -- app/version.sh@19 -- # patch=0 00:08:06.953 14:07:43 version -- app/version.sh@20 -- # get_header_version suffix 00:08:06.953 14:07:43 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:08:06.953 14:07:43 version -- app/version.sh@14 -- # cut -f2 00:08:06.953 14:07:43 version -- app/version.sh@14 -- # tr -d '"' 00:08:06.953 14:07:43 version -- app/version.sh@20 -- # suffix=-pre 00:08:06.953 14:07:43 version -- app/version.sh@22 -- # version=25.1 00:08:06.953 14:07:43 version -- app/version.sh@25 -- # (( patch != 0 )) 00:08:06.953 14:07:43 version -- app/version.sh@28 -- # version=25.1rc0 00:08:06.953 14:07:43 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:08:06.953 14:07:43 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:08:06.953 14:07:43 version -- app/version.sh@30 -- # py_version=25.1rc0 00:08:06.953 14:07:43 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:08:06.953 00:08:06.953 real 0m0.252s 00:08:06.953 user 0m0.164s 00:08:06.953 sys 0m0.126s 00:08:06.953 14:07:43 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:06.953 14:07:43 version -- common/autotest_common.sh@10 -- # set +x 00:08:06.953 ************************************ 00:08:06.953 END TEST version 00:08:06.953 ************************************ 00:08:06.953 
14:07:43 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:08:06.953 14:07:43 -- spdk/autotest.sh@188 -- # [[ 1 -eq 1 ]] 00:08:06.953 14:07:43 -- spdk/autotest.sh@189 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:06.953 14:07:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.953 14:07:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.953 14:07:43 -- common/autotest_common.sh@10 -- # set +x 00:08:06.953 ************************************ 00:08:06.953 START TEST bdev_raid 00:08:06.953 ************************************ 00:08:06.953 14:07:43 bdev_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:08:06.953 * Looking for test storage... 00:08:06.953 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:08:06.953 14:07:44 bdev_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:06.953 14:07:44 bdev_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:08:06.953 14:07:44 bdev_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:06.953 14:07:44 bdev_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@336 -- # IFS=.-: 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@336 -- # read -ra ver1 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@337 -- # IFS=.-: 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@337 -- # read -ra ver2 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@338 -- # local 'op=<' 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@340 -- # ver1_l=2 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@341 -- # ver2_l=1 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@343 -- # local 
lt=0 gt=0 eq=0 v 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@344 -- # case "$op" in 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@345 -- # : 1 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@365 -- # decimal 1 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@353 -- # local d=1 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@355 -- # echo 1 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@366 -- # decimal 2 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@353 -- # local d=2 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@355 -- # echo 2 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:06.953 14:07:44 bdev_raid -- scripts/common.sh@368 -- # return 0 00:08:06.953 14:07:44 bdev_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:06.953 14:07:44 bdev_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:06.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.953 --rc genhtml_branch_coverage=1 00:08:06.953 --rc genhtml_function_coverage=1 00:08:06.953 --rc genhtml_legend=1 00:08:06.953 --rc geninfo_all_blocks=1 00:08:06.953 --rc geninfo_unexecuted_blocks=1 00:08:06.953 00:08:06.953 ' 00:08:06.953 14:07:44 bdev_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:06.953 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:06.953 --rc genhtml_branch_coverage=1 00:08:06.953 --rc genhtml_function_coverage=1 00:08:06.953 --rc genhtml_legend=1 00:08:06.953 --rc geninfo_all_blocks=1 00:08:06.953 --rc geninfo_unexecuted_blocks=1 00:08:06.953 00:08:06.953 ' 00:08:06.953 14:07:44 bdev_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:06.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.953 --rc genhtml_branch_coverage=1 00:08:06.953 --rc genhtml_function_coverage=1 00:08:06.953 --rc genhtml_legend=1 00:08:06.953 --rc geninfo_all_blocks=1 00:08:06.953 --rc geninfo_unexecuted_blocks=1 00:08:06.953 00:08:06.953 ' 00:08:06.953 14:07:44 bdev_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:06.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:06.953 --rc genhtml_branch_coverage=1 00:08:06.953 --rc genhtml_function_coverage=1 00:08:06.953 --rc genhtml_legend=1 00:08:06.953 --rc geninfo_all_blocks=1 00:08:06.953 --rc geninfo_unexecuted_blocks=1 00:08:06.953 00:08:06.953 ' 00:08:06.954 14:07:44 bdev_raid -- bdev/bdev_raid.sh@12 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:08:06.954 14:07:44 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:08:06.954 14:07:44 bdev_raid -- bdev/bdev_raid.sh@14 -- # rpc_py=rpc_cmd 00:08:06.954 14:07:44 bdev_raid -- bdev/bdev_raid.sh@946 -- # mkdir -p /raidtest 00:08:06.954 14:07:44 bdev_raid -- bdev/bdev_raid.sh@947 -- # trap 'cleanup; exit 1' EXIT 00:08:06.954 14:07:44 bdev_raid -- bdev/bdev_raid.sh@949 -- # base_blocklen=512 00:08:06.954 14:07:44 bdev_raid -- bdev/bdev_raid.sh@951 -- # run_test raid1_resize_data_offset_test raid_resize_data_offset_test 00:08:06.954 14:07:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:06.954 14:07:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:06.954 14:07:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:06.954 ************************************ 
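The `lt 1.15 2` call traced above (from `scripts/common.sh`) decides whether the installed `lcov` predates 1.16 by splitting each version on `.`, `-`, and `:` and comparing components numerically, left to right. A hedged sketch of that comparison, assuming purely numeric components (the real script additionally validates each field with a `decimal` helper):

```shell
#!/usr/bin/env bash
# Sketch of the lt()/cmp_versions logic seen in the scripts/common.sh trace.
# Returns 0 (true) when $1 < $2, 1 otherwise; missing components count as 0.
lt() {
    local -a ver1 ver2
    local v a b
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    # Walk up to the longer of the two component lists, as in the trace.
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2   && echo "1.15 < 2, so the older LCOV_OPTS are exported"
lt 2.1 2.0  || echo "2.1 >= 2.0"
```

Note that the comparison is per-component, not lexicographic: `1.15` is less than `2` but greater than `1.9` would be false here, since `15 > 9` numerically.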
00:08:06.954 START TEST raid1_resize_data_offset_test 00:08:06.954 ************************************ 00:08:06.954 14:07:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1129 -- # raid_resize_data_offset_test 00:08:06.954 14:07:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@917 -- # raid_pid=59855 00:08:06.954 14:07:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@916 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:06.954 14:07:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@918 -- # echo 'Process raid pid: 59855' 00:08:06.954 Process raid pid: 59855 00:08:06.954 14:07:44 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@919 -- # waitforlisten 59855 00:08:06.954 14:07:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@835 -- # '[' -z 59855 ']' 00:08:06.954 14:07:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:06.954 14:07:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:06.954 14:07:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:06.954 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:06.954 14:07:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:06.954 14:07:44 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:07.215 [2024-11-27 14:07:44.285236] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:08:07.215 [2024-11-27 14:07:44.285428] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:07.215 [2024-11-27 14:07:44.477018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:07.475 [2024-11-27 14:07:44.641221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:07.734 [2024-11-27 14:07:44.847997] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.734 [2024-11-27 14:07:44.848069] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:07.993 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:07.993 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@868 -- # return 0 00:08:07.993 14:07:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@922 -- # rpc_cmd bdev_malloc_create -b malloc0 64 512 -o 16 00:08:07.993 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:07.993 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.252 malloc0 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@923 -- # rpc_cmd bdev_malloc_create -b malloc1 64 512 -o 16 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.252 malloc1 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.252 14:07:45 
bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@924 -- # rpc_cmd bdev_null_create null0 64 512 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.252 null0 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@926 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''malloc0 malloc1 null0'\''' -s 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.252 [2024-11-27 14:07:45.399492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc0 is claimed 00:08:08.252 [2024-11-27 14:07:45.402010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:08.252 [2024-11-27 14:07:45.402087] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev null0 is claimed 00:08:08.252 [2024-11-27 14:07:45.402280] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:08.252 [2024-11-27 14:07:45.402317] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 129024, blocklen 512 00:08:08.252 [2024-11-27 14:07:45.402640] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0 00:08:08.252 [2024-11-27 14:07:45.402899] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:08.252 [2024-11-27 14:07:45.402925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:08.252 [2024-11-27 14:07:45.403103] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@929 -- # (( 2048 == 2048 )) 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@931 -- # rpc_cmd bdev_null_delete null0 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.252 [2024-11-27 14:07:45.463489] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: null0 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@935 -- # rpc_cmd bdev_malloc_create -b malloc2 512 512 -o 30 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.252 14:07:45 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.822 malloc2 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@936 -- # rpc_cmd bdev_raid_add_base_bdev 
Raid malloc2 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.822 [2024-11-27 14:07:46.011541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:08.822 [2024-11-27 14:07:46.029095] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.822 [2024-11-27 14:07:46.031742] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev Raid 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # jq -r '.[].base_bdevs_list[2].data_offset' 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@939 -- # (( 2070 == 2070 )) 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- bdev/bdev_raid.sh@941 -- # killprocess 59855 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@954 -- # '[' -z 59855 ']' 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@958 -- # kill -0 59855 00:08:08.822 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # uname 00:08:09.081 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux 
']' 00:08:09.081 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59855 00:08:09.081 killing process with pid 59855 00:08:09.081 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.081 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.081 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59855' 00:08:09.081 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@973 -- # kill 59855 00:08:09.081 14:07:46 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@978 -- # wait 59855 00:08:09.081 [2024-11-27 14:07:46.131254] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:09.081 [2024-11-27 14:07:46.131463] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev Raid: Operation canceled 00:08:09.081 [2024-11-27 14:07:46.131546] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:09.081 [2024-11-27 14:07:46.131571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: malloc2 00:08:09.081 [2024-11-27 14:07:46.165435] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:09.081 [2024-11-27 14:07:46.165896] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:09.081 [2024-11-27 14:07:46.165930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:10.987 [2024-11-27 14:07:47.779843] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:11.924 ************************************ 00:08:11.924 END TEST raid1_resize_data_offset_test 00:08:11.924 ************************************ 00:08:11.924 14:07:48 bdev_raid.raid1_resize_data_offset_test -- 
bdev/bdev_raid.sh@943 -- # return 0 00:08:11.924 00:08:11.924 real 0m4.692s 00:08:11.924 user 0m4.612s 00:08:11.924 sys 0m0.645s 00:08:11.924 14:07:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.924 14:07:48 bdev_raid.raid1_resize_data_offset_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.924 14:07:48 bdev_raid -- bdev/bdev_raid.sh@953 -- # run_test raid0_resize_superblock_test raid_resize_superblock_test 0 00:08:11.924 14:07:48 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.924 14:07:48 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.924 14:07:48 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:11.924 ************************************ 00:08:11.924 START TEST raid0_resize_superblock_test 00:08:11.924 ************************************ 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 0 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=0 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=59940 00:08:11.924 Process raid pid: 59940 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 59940' 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 59940 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 59940 ']' 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:11.924 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:11.924 14:07:48 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.924 [2024-11-27 14:07:49.026585] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:08:11.924 [2024-11-27 14:07:49.026808] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:12.187 [2024-11-27 14:07:49.207670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:12.187 [2024-11-27 14:07:49.341408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:12.459 [2024-11-27 14:07:49.557191] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.459 [2024-11-27 14:07:49.557236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:12.717 14:07:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:12.717 14:07:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:12.717 14:07:49 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:12.717 14:07:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:12.717 14:07:49 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:08:13.285 malloc0 00:08:13.285 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.285 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:13.285 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.285 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.285 [2024-11-27 14:07:50.546202] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:13.285 [2024-11-27 14:07:50.546270] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.285 [2024-11-27 14:07:50.546310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:13.285 [2024-11-27 14:07:50.546333] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.285 [2024-11-27 14:07:50.549094] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.285 [2024-11-27 14:07:50.549143] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:13.285 pt0 00:08:13.285 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.285 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:13.285 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.285 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.545 400cba74-57a6-4354-a53c-1b606e85ea04 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 
00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.545 544ce612-4664-4778-a0a9-2bd4e14ff0ae 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.545 00f326a5-d752-4f95-b10f-c3d4c87a2dd1 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@870 -- # rpc_cmd bdev_raid_create -n Raid -r 0 -z 64 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.545 [2024-11-27 14:07:50.701154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 544ce612-4664-4778-a0a9-2bd4e14ff0ae is claimed 00:08:13.545 [2024-11-27 14:07:50.701266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 00f326a5-d752-4f95-b10f-c3d4c87a2dd1 is claimed 00:08:13.545 [2024-11-27 14:07:50.701453] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:13.545 [2024-11-27 14:07:50.701478] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 245760, blocklen 512 00:08:13.545 [2024-11-27 14:07:50.701842] 
bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:13.545 [2024-11-27 14:07:50.702090] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:13.545 [2024-11-27 14:07:50.702114] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:13.545 [2024-11-27 14:07:50.702313] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.545 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:13.848 14:07:50 
bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # jq '.[].num_blocks' 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.848 [2024-11-27 14:07:50.841451] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@880 -- # (( 245760 == 245760 )) 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.848 [2024-11-27 14:07:50.893447] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:13.848 [2024-11-27 14:07:50.893607] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '544ce612-4664-4778-a0a9-2bd4e14ff0ae' was resized: old size 131072, new size 204800 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.848 [2024-11-27 14:07:50.905332] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:13.848 [2024-11-27 14:07:50.905469] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '00f326a5-d752-4f95-b10f-c3d4c87a2dd1' was resized: old size 131072, new size 204800 00:08:13.848 [2024-11-27 14:07:50.905652] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 245760 to 393216 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.848 14:07:50 
bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.848 14:07:50 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # jq '.[].num_blocks' 00:08:13.848 [2024-11-27 14:07:51.029541] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@894 -- # (( 393216 == 393216 )) 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.848 [2024-11-27 14:07:51.081284] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being 
removed: closing lvstore lvs0 00:08:13.848 [2024-11-27 14:07:51.081516] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 00:08:13.848 [2024-11-27 14:07:51.081673] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:13.848 [2024-11-27 14:07:51.081822] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:13.848 [2024-11-27 14:07:51.082096] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:13.848 [2024-11-27 14:07:51.082249] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:13.848 [2024-11-27 14:07:51.082391] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.848 [2024-11-27 14:07:51.089171] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:13.848 [2024-11-27 14:07:51.089231] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.848 [2024-11-27 14:07:51.089259] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:13.848 [2024-11-27 14:07:51.089276] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.848 [2024-11-27 14:07:51.092142] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.848 [2024-11-27 14:07:51.092193] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 
00:08:13.848 pt0 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.848 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:13.848 [2024-11-27 14:07:51.094453] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 544ce612-4664-4778-a0a9-2bd4e14ff0ae 00:08:13.848 [2024-11-27 14:07:51.094523] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 544ce612-4664-4778-a0a9-2bd4e14ff0ae is claimed 00:08:13.849 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.849 [2024-11-27 14:07:51.094669] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 00f326a5-d752-4f95-b10f-c3d4c87a2dd1 00:08:13.849 [2024-11-27 14:07:51.094703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 00f326a5-d752-4f95-b10f-c3d4c87a2dd1 is claimed 00:08:13.849 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.849 [2024-11-27 14:07:51.094884] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 00f326a5-d752-4f95-b10f-c3d4c87a2dd1 (2) smaller than existing raid bdev Raid (3) 00:08:13.849 [2024-11-27 14:07:51.094920] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 544ce612-4664-4778-a0a9-2bd4e14ff0ae: File exists 00:08:13.849 [2024-11-27 14:07:51.094986] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:13.849 [2024-11-27 14:07:51.095005] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 393216, blocklen 512 00:08:13.849 [2024-11-27 14:07:51.095332] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:13.849 [2024-11-27 14:07:51.095604] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:13.849 [2024-11-27 
14:07:51.095625] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:13.849 [2024-11-27 14:07:51.095835] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:13.849 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:13.849 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:13.849 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:13.849 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:13.849 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # jq '.[].num_blocks' 00:08:13.849 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:13.849 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:13.849 [2024-11-27 14:07:51.109483] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@905 -- # (( 393216 == 393216 )) 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 59940 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 59940 ']' 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 59940 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59940 00:08:14.115 killing process with pid 59940 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59940' 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 59940 00:08:14.115 [2024-11-27 14:07:51.195258] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:14.115 [2024-11-27 14:07:51.195330] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:14.115 14:07:51 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 59940 00:08:14.115 [2024-11-27 14:07:51.195396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:14.115 [2024-11-27 14:07:51.195411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:15.493 [2024-11-27 14:07:52.568267] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.430 14:07:53 bdev_raid.raid0_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:16.430 00:08:16.430 real 0m4.760s 00:08:16.430 user 0m5.078s 00:08:16.430 sys 0m0.653s 00:08:16.430 14:07:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:16.430 ************************************ 00:08:16.430 END TEST raid0_resize_superblock_test 00:08:16.430 
************************************ 00:08:16.430 14:07:53 bdev_raid.raid0_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.689 14:07:53 bdev_raid -- bdev/bdev_raid.sh@954 -- # run_test raid1_resize_superblock_test raid_resize_superblock_test 1 00:08:16.689 14:07:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:16.689 14:07:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:16.689 14:07:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.689 ************************************ 00:08:16.689 START TEST raid1_resize_superblock_test 00:08:16.689 ************************************ 00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1129 -- # raid_resize_superblock_test 1 00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@854 -- # local raid_level=1 00:08:16.689 Process raid pid: 60038 00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@857 -- # raid_pid=60038 00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@858 -- # echo 'Process raid pid: 60038' 00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@856 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@859 -- # waitforlisten 60038 00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 60038 ']' 00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:16.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:16.689 14:07:53 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.689 [2024-11-27 14:07:53.823632] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:08:16.689 [2024-11-27 14:07:53.823981] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:16.948 [2024-11-27 14:07:53.997296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:16.948 [2024-11-27 14:07:54.139251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.207 [2024-11-27 14:07:54.368306] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.207 [2024-11-27 14:07:54.368349] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.774 14:07:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:17.774 14:07:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:17.774 14:07:54 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@861 -- # rpc_cmd bdev_malloc_create -b malloc0 512 512 00:08:17.774 14:07:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:17.774 14:07:54 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.342 malloc0 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.342 14:07:55 
bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@863 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.342 [2024-11-27 14:07:55.454876] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:18.342 [2024-11-27 14:07:55.455095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.342 [2024-11-27 14:07:55.455172] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:18.342 [2024-11-27 14:07:55.455447] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.342 [2024-11-27 14:07:55.458286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.342 [2024-11-27 14:07:55.458456] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:18.342 pt0 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@864 -- # rpc_cmd bdev_lvol_create_lvstore pt0 lvs0 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.342 86d2fc7a-d482-47dd-ba9e-114a79fb8e89 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@866 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol0 64 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.342 14:07:55 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.342 36a3446b-fffc-4230-a721-b233d637ba8d 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@867 -- # rpc_cmd bdev_lvol_create -l lvs0 lvol1 64 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.342 9b12c4ea-ec92-4eb6-bf39-4ad4f61874ff 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@869 -- # case $raid_level in 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@871 -- # rpc_cmd bdev_raid_create -n Raid -r 1 -b ''\''lvs0/lvol0 lvs0/lvol1'\''' -s 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.342 [2024-11-27 14:07:55.599413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 36a3446b-fffc-4230-a721-b233d637ba8d is claimed 00:08:18.342 [2024-11-27 14:07:55.599677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9b12c4ea-ec92-4eb6-bf39-4ad4f61874ff is claimed 00:08:18.342 [2024-11-27 14:07:55.599901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:18.342 [2024-11-27 14:07:55.599929] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 122880, blocklen 512 00:08:18.342 [2024-11-27 14:07:55.600258] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:18.342 [2024-11-27 14:07:55.600550] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:18.342 [2024-11-27 14:07:55.600567] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780 00:08:18.342 [2024-11-27 14:07:55.600757] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.342 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # jq '.[].num_blocks' 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@875 -- # (( 64 == 64 )) 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # jq '.[].num_blocks' 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@876 -- # (( 64 == 64 )) 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- 
bdev/bdev_raid.sh@881 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # jq '.[].num_blocks' 00:08:18.601 [2024-11-27 14:07:55.719813] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@879 -- # case $raid_level in 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@881 -- # (( 122880 == 122880 )) 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@885 -- # rpc_cmd bdev_lvol_resize lvs0/lvol0 100 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.601 [2024-11-27 14:07:55.775790] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:18.601 [2024-11-27 14:07:55.775829] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '36a3446b-fffc-4230-a721-b233d637ba8d' was resized: old size 131072, new size 204800 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@886 -- # rpc_cmd bdev_lvol_resize lvs0/lvol1 100 00:08:18.601 14:07:55 
bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.601 [2024-11-27 14:07:55.783654] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:08:18.601 [2024-11-27 14:07:55.783827] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev '9b12c4ea-ec92-4eb6-bf39-4ad4f61874ff' was resized: old size 131072, new size 204800 00:08:18.601 [2024-11-27 14:07:55.783882] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 122880 to 196608 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol0 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # jq '.[].num_blocks' 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@889 -- # (( 100 == 100 )) 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # rpc_cmd bdev_get_bdevs -b lvs0/lvol1 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # jq '.[].num_blocks' 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.601 14:07:55 bdev_raid.raid1_resize_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@890 -- # (( 100 == 100 )) 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # jq '.[].num_blocks' 00:08:18.861 [2024-11-27 14:07:55.903854] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@893 -- # case $raid_level in 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@895 -- # (( 196608 == 196608 )) 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@898 -- # rpc_cmd bdev_passthru_delete pt0 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 [2024-11-27 14:07:55.947571] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev pt0 being removed: closing lvstore lvs0 00:08:18.861 [2024-11-27 14:07:55.947828] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol0 
00:08:18.861 [2024-11-27 14:07:55.947876] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: lvs0/lvol1 00:08:18.861 [2024-11-27 14:07:55.948083] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:18.861 [2024-11-27 14:07:55.948370] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.861 [2024-11-27 14:07:55.948515] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.861 [2024-11-27 14:07:55.948537] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@899 -- # rpc_cmd bdev_passthru_create -b malloc0 -p pt0 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 [2024-11-27 14:07:55.955465] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc0 00:08:18.861 [2024-11-27 14:07:55.955556] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.861 [2024-11-27 14:07:55.955585] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008180 00:08:18.861 [2024-11-27 14:07:55.955604] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:18.861 [2024-11-27 14:07:55.958701] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.861 [2024-11-27 14:07:55.958882] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt0 00:08:18.861 pt0 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.861 
14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@900 -- # rpc_cmd bdev_wait_for_examine 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.861 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.861 [2024-11-27 14:07:55.961172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 36a3446b-fffc-4230-a721-b233d637ba8d 00:08:18.861 [2024-11-27 14:07:55.961268] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 36a3446b-fffc-4230-a721-b233d637ba8d is claimed 00:08:18.861 [2024-11-27 14:07:55.961408] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev 9b12c4ea-ec92-4eb6-bf39-4ad4f61874ff 00:08:18.862 [2024-11-27 14:07:55.961442] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev 9b12c4ea-ec92-4eb6-bf39-4ad4f61874ff is claimed 00:08:18.862 [2024-11-27 14:07:55.961595] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev 9b12c4ea-ec92-4eb6-bf39-4ad4f61874ff (2) smaller than existing raid bdev Raid (3) 00:08:18.862 [2024-11-27 14:07:55.961628] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev 36a3446b-fffc-4230-a721-b233d637ba8d: File exists 00:08:18.862 [2024-11-27 14:07:55.961685] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:08:18.862 [2024-11-27 14:07:55.961704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:08:18.862 [2024-11-27 14:07:55.962040] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:08:18.862 [2024-11-27 14:07:55.962245] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:08:18.862 [2024-11-27 14:07:55.962309] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007b00 00:08:18.862 
[2024-11-27 14:07:55.962504] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:18.862 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.862 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:18.862 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # rpc_cmd bdev_get_bdevs -b Raid 00:08:18.862 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.862 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:18.862 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:18.862 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # jq '.[].num_blocks' 00:08:18.862 [2024-11-27 14:07:55.975866] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:18.862 14:07:55 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.862 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:18.862 14:07:55 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@904 -- # case $raid_level in 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@906 -- # (( 196608 == 196608 )) 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@909 -- # killprocess 60038 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 60038 ']' 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@958 -- # kill -0 60038 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@959 -- # '[' 
Linux = Linux ']' 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60038 00:08:18.862 killing process with pid 60038 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60038' 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@973 -- # kill 60038 00:08:18.862 [2024-11-27 14:07:56.057369] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:18.862 14:07:56 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@978 -- # wait 60038 00:08:18.862 [2024-11-27 14:07:56.057451] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:18.862 [2024-11-27 14:07:56.057524] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:18.862 [2024-11-27 14:07:56.057538] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Raid, state offline 00:08:20.271 [2024-11-27 14:07:57.424724] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:21.209 14:07:58 bdev_raid.raid1_resize_superblock_test -- bdev/bdev_raid.sh@911 -- # return 0 00:08:21.209 00:08:21.209 real 0m4.728s 00:08:21.209 user 0m5.138s 00:08:21.209 sys 0m0.601s 00:08:21.209 14:07:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:21.209 14:07:58 bdev_raid.raid1_resize_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:21.209 ************************************ 00:08:21.209 END TEST raid1_resize_superblock_test 00:08:21.209 ************************************ 00:08:21.468 
14:07:58 bdev_raid -- bdev/bdev_raid.sh@956 -- # uname -s 00:08:21.468 14:07:58 bdev_raid -- bdev/bdev_raid.sh@956 -- # '[' Linux = Linux ']' 00:08:21.468 14:07:58 bdev_raid -- bdev/bdev_raid.sh@956 -- # modprobe -n nbd 00:08:21.468 14:07:58 bdev_raid -- bdev/bdev_raid.sh@957 -- # has_nbd=true 00:08:21.468 14:07:58 bdev_raid -- bdev/bdev_raid.sh@958 -- # modprobe nbd 00:08:21.468 14:07:58 bdev_raid -- bdev/bdev_raid.sh@959 -- # run_test raid_function_test_raid0 raid_function_test raid0 00:08:21.468 14:07:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:21.468 14:07:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:21.468 14:07:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:21.468 ************************************ 00:08:21.468 START TEST raid_function_test_raid0 00:08:21.468 ************************************ 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1129 -- # raid_function_test raid0 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@64 -- # local raid_level=raid0 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:21.468 Process raid pid: 60143 00:08:21.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@69 -- # raid_pid=60143 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60143' 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@71 -- # waitforlisten 60143 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@835 -- # '[' -z 60143 ']' 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:21.468 14:07:58 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:21.468 [2024-11-27 14:07:58.645456] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:08:21.468 [2024-11-27 14:07:58.646008] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.728 [2024-11-27 14:07:58.835243] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.728 [2024-11-27 14:07:58.986068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:21.988 [2024-11-27 14:07:59.197720] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:21.988 [2024-11-27 14:07:59.197828] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@868 -- # return 0 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:22.555 Base_1 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:22.555 Base_2 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@75 -- # rpc_cmd bdev_raid_create -z 
64 -r raid0 -b ''\''Base_1 Base_2'\''' -n raid 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:22.555 [2024-11-27 14:07:59.743449] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:22.555 [2024-11-27 14:07:59.746131] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:22.555 [2024-11-27 14:07:59.746363] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:22.555 [2024-11-27 14:07:59.746393] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:22.555 [2024-11-27 14:07:59.746757] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:22.555 [2024-11-27 14:07:59.747022] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:22.555 [2024-11-27 14:07:59.747038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:22.555 [2024-11-27 14:07:59.747284] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@12 -- # local i 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:22.555 14:07:59 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:23.123 [2024-11-27 14:08:00.091662] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:23.123 /dev/nbd0 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@873 -- # local i 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:08:23.123 
14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@877 -- # break 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:08:23.123 1+0 records in 00:08:23.123 1+0 records out 00:08:23.123 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000340014 s, 12.0 MB/s 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@890 -- # size=4096 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@893 -- # return 0 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:23.123 14:08:00 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:08:23.382 { 00:08:23.382 "nbd_device": "/dev/nbd0", 00:08:23.382 "bdev_name": "raid" 00:08:23.382 } 00:08:23.382 ]' 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # echo '[ 00:08:23.382 { 00:08:23.382 "nbd_device": "/dev/nbd0", 00:08:23.382 "bdev_name": "raid" 00:08:23.382 } 00:08:23.382 ]' 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=1 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 1 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@84 -- # count=1 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']' 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@17 -- # hash blkdiscard 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0 00:08:23.382 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@19 -- # local blksize 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # grep -v 
LOG-SEC 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@20 -- # blksize=512 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321') 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456') 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@25 -- # local unmap_off 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@26 -- # local unmap_len 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096 00:08:23.383 4096+0 records in 00:08:23.383 4096+0 records out 00:08:23.383 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0301527 s, 69.6 MB/s 00:08:23.383 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct 00:08:23.951 4096+0 records in 00:08:23.951 4096+0 records out 00:08:23.951 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.348624 s, 6.0 MB/s 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- 
bdev/bdev_raid.sh@36 -- # (( i = 0 )) 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=0 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=65536 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc 00:08:23.951 128+0 records in 00:08:23.951 128+0 records out 00:08:23.951 65536 bytes (66 kB, 64 KiB) copied, 0.00114227 s, 57.4 MB/s 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=526336 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc 00:08:23.951 2035+0 records in 00:08:23.951 2035+0 records out 00:08:23.951 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.0085789 s, 121 MB/s 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:23.951 14:08:00 
bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@37 -- # unmap_off=164352 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@38 -- # unmap_len=233472 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc 00:08:23.951 456+0 records in 00:08:23.951 456+0 records out 00:08:23.951 233472 bytes (233 kB, 228 KiB) copied, 0.00335111 s, 69.7 MB/s 00:08:23.951 14:08:00 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0 00:08:23.951 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0 00:08:23.951 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0 00:08:23.951 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i++ )) 00:08:23.951 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@36 -- # (( i < 3 )) 00:08:23.951 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@52 -- # return 0 00:08:23.951 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:08:23.951 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:08:23.951 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:08:23.951 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@50 -- # local nbd_list 00:08:23.951 14:08:01 
bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@51 -- # local i 00:08:23.951 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:08:23.951 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:08:24.210 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:08:24.210 [2024-11-27 14:08:01.360385] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:24.210 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:08:24.210 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:08:24.210 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:08:24.210 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:08:24.210 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:08:24.210 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@41 -- # break 00:08:24.210 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@45 -- # return 0 00:08:24.210 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock 00:08:24.210 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock 00:08:24.210 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks 00:08:24.468 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:08:24.468 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:08:24.469 14:08:01 bdev_raid.raid_function_test_raid0 -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:08:24.469 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # echo '' 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # true 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@65 -- # count=0 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/nbd_common.sh@66 -- # echo 0 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@92 -- # count=0 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']' 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@97 -- # killprocess 60143 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@954 -- # '[' -z 60143 ']' 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@958 -- # kill -0 60143 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # uname 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60143 00:08:24.728 killing process with pid 60143 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60143' 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@973 -- # kill 
60143 00:08:24.728 [2024-11-27 14:08:01.787743] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:24.728 14:08:01 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@978 -- # wait 60143 00:08:24.728 [2024-11-27 14:08:01.787897] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:24.728 [2024-11-27 14:08:01.787963] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:24.728 [2024-11-27 14:08:01.787987] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline 00:08:24.728 [2024-11-27 14:08:01.974939] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:26.149 ************************************ 00:08:26.149 END TEST raid_function_test_raid0 00:08:26.149 ************************************ 00:08:26.149 14:08:03 bdev_raid.raid_function_test_raid0 -- bdev/bdev_raid.sh@99 -- # return 0 00:08:26.149 00:08:26.149 real 0m4.479s 00:08:26.149 user 0m5.524s 00:08:26.149 sys 0m1.099s 00:08:26.149 14:08:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:26.149 14:08:03 bdev_raid.raid_function_test_raid0 -- common/autotest_common.sh@10 -- # set +x 00:08:26.149 14:08:03 bdev_raid -- bdev/bdev_raid.sh@960 -- # run_test raid_function_test_concat raid_function_test concat 00:08:26.149 14:08:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:26.149 14:08:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:26.149 14:08:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:26.149 ************************************ 00:08:26.149 START TEST raid_function_test_concat 00:08:26.149 ************************************ 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1129 -- # raid_function_test concat 00:08:26.149 Process raid pid: 60278 00:08:26.149 Waiting for process 
to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@64 -- # local raid_level=concat 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@65 -- # local nbd=/dev/nbd0 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@66 -- # local raid_bdev 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@69 -- # raid_pid=60278 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@70 -- # echo 'Process raid pid: 60278' 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@71 -- # waitforlisten 60278 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@835 -- # '[' -z 60278 ']' 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:26.149 14:08:03 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:26.149 [2024-11-27 14:08:03.183896] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:08:26.149 [2024-11-27 14:08:03.184385] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:26.149 [2024-11-27 14:08:03.378477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.408 [2024-11-27 14:08:03.539738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:26.666 [2024-11-27 14:08:03.750380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:26.666 [2024-11-27 14:08:03.750679] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:27.234 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:27.234 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@868 -- # return 0 00:08:27.234 14:08:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@73 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_1 00:08:27.234 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.234 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:27.234 Base_1 00:08:27.234 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.234 14:08:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@74 -- # rpc_cmd bdev_malloc_create 32 512 -b Base_2 00:08:27.234 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:27.235 Base_2 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@75 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''Base_1 Base_2'\''' -n raid 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:27.235 [2024-11-27 14:08:04.330253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:08:27.235 [2024-11-27 14:08:04.332835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:08:27.235 [2024-11-27 14:08:04.332979] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:27.235 [2024-11-27 14:08:04.333004] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:27.235 [2024-11-27 14:08:04.333343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:27.235 [2024-11-27 14:08:04.333549] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:27.235 [2024-11-27 14:08:04.333564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid, raid_bdev 0x617000007780 00:08:27.235 [2024-11-27 14:08:04.333760] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # rpc_cmd bdev_raid_get_bdevs online 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # jq -r '.[0]["name"] | select(.)' 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.235 14:08:04 
bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@77 -- # raid_bdev=raid 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@78 -- # '[' raid = '' ']' 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@83 -- # nbd_start_disks /var/tmp/spdk.sock raid /dev/nbd0 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # bdev_list=('raid') 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@12 -- # local i 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:08:27.235 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid /dev/nbd0 00:08:27.494 [2024-11-27 14:08:04.622374] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:08:27.494 /dev/nbd0 00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@873 -- # local i 00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- 
common/autotest_common.sh@875 -- # (( i = 1 ))
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@877 -- # break
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct
00:08:27.494 1+0 records in
00:08:27.494 1+0 records out
00:08:27.494 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00027674 s, 14.8 MB/s
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@890 -- # size=4096
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@893 -- # return 0
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@14 -- # (( i < 1 ))
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # nbd_get_count /var/tmp/spdk.sock
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:08:27.494 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:08:27.753 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:08:27.753 {
00:08:27.753 "nbd_device": "/dev/nbd0",
00:08:27.753 "bdev_name": "raid"
00:08:27.753 }
00:08:27.753 ]'
00:08:27.753 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[
00:08:27.753 {
00:08:27.753 "nbd_device": "/dev/nbd0",
00:08:27.753 "bdev_name": "raid"
00:08:27.753 }
00:08:27.753 ]'
00:08:27.753 14:08:04 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=1
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 1
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@84 -- # count=1
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@85 -- # '[' 1 -ne 1 ']'
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@89 -- # raid_unmap_data_verify /dev/nbd0
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@17 -- # hash blkdiscard
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@18 -- # local nbd=/dev/nbd0
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@19 -- # local blksize
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # lsblk -o LOG-SEC /dev/nbd0
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # grep -v LOG-SEC
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # cut -d ' ' -f 5
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@20 -- # blksize=512
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@21 -- # local rw_blk_num=4096
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@22 -- # local rw_len=2097152
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # unmap_blk_offs=('0' '1028' '321')
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@23 -- # local unmap_blk_offs
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # unmap_blk_nums=('128' '2035' '456')
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@24 -- # local unmap_blk_nums
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@25 -- # local unmap_off
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@26 -- # local unmap_len
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@29 -- # dd if=/dev/urandom of=/raidtest/raidrandtest bs=512 count=4096
00:08:28.012 4096+0 records in
00:08:28.012 4096+0 records out
00:08:28.012 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.0241926 s, 86.7 MB/s
00:08:28.012 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@30 -- # dd if=/raidtest/raidrandtest of=/dev/nbd0 bs=512 count=4096 oflag=direct
00:08:28.271 4096+0 records in
00:08:28.271 4096+0 records out
00:08:28.271 2097152 bytes (2.1 MB, 2.0 MiB) copied, 0.329548 s, 6.4 MB/s
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@31 -- # blockdev --flushbufs /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@34 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i = 0 ))
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=65536
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=0 count=128 conv=notrunc
00:08:28.271 128+0 records in
00:08:28.271 128+0 records out
00:08:28.271 65536 bytes (66 kB, 64 KiB) copied, 0.00102565 s, 63.9 MB/s
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 0 -l 65536 /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=526336
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=1041920
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=1028 count=2035 conv=notrunc
00:08:28.271 2035+0 records in
00:08:28.271 2035+0 records out
00:08:28.271 1041920 bytes (1.0 MB, 1018 KiB) copied, 0.00718244 s, 145 MB/s
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 526336 -l 1041920 /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@37 -- # unmap_off=164352
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@38 -- # unmap_len=233472
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@41 -- # dd if=/dev/zero of=/raidtest/raidrandtest bs=512 seek=321 count=456 conv=notrunc
00:08:28.271 456+0 records in
00:08:28.271 456+0 records out
00:08:28.271 233472 bytes (233 kB, 228 KiB) copied, 0.00214916 s, 109 MB/s
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@44 -- # blkdiscard -o 164352 -l 233472 /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@45 -- # blockdev --flushbufs /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@48 -- # cmp -b -n 2097152 /raidtest/raidrandtest /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i++ ))
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@36 -- # (( i < 3 ))
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@52 -- # return 0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@91 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0')
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@51 -- # local i
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:08:28.271 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0
00:08:28.839 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:08:28.839 [2024-11-27 14:08:05.828586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:28.839 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:08:28.840 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:08:28.840 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:08:28.840 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:08:28.840 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:08:28.840 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@41 -- # break
00:08:28.840 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@45 -- # return 0
00:08:28.840 14:08:05 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # nbd_get_count /var/tmp/spdk.sock
00:08:28.840 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk.sock
00:08:28.840 14:08:05 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_get_disks
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # echo ''
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # true
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@65 -- # count=0
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/nbd_common.sh@66 -- # echo 0
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@92 -- # count=0
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@93 -- # '[' 0 -ne 0 ']'
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@97 -- # killprocess 60278
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@954 -- # '[' -z 60278 ']'
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@958 -- # kill -0 60278
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # uname
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60278
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:29.099 killing process with pid 60278
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60278'
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@973 -- # kill 60278
00:08:29.099 [2024-11-27 14:08:06.230307] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:29.099 14:08:06 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@978 -- # wait 60278
00:08:29.099 [2024-11-27 14:08:06.230419] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:29.099 [2024-11-27 14:08:06.230484] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:29.099 [2024-11-27 14:08:06.230503] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid, state offline
00:08:29.359 [2024-11-27 14:08:06.401567] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:30.382 14:08:07 bdev_raid.raid_function_test_concat -- bdev/bdev_raid.sh@99 -- # return 0
00:08:30.382
00:08:30.382 real 0m4.367s
00:08:30.382 user 0m5.388s
00:08:30.382 sys 0m1.002s
00:08:30.382 ************************************
00:08:30.382 END TEST raid_function_test_concat
00:08:30.382 ************************************
00:08:30.382 14:08:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:30.382 14:08:07 bdev_raid.raid_function_test_concat -- common/autotest_common.sh@10 -- # set +x
00:08:30.382 14:08:07 bdev_raid -- bdev/bdev_raid.sh@963 -- # run_test raid0_resize_test raid_resize_test 0
00:08:30.382 14:08:07 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:30.382 14:08:07 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:30.382 14:08:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:30.382 ************************************
00:08:30.382 START TEST raid0_resize_test
00:08:30.382 ************************************
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 0
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=0
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60412
00:08:30.382 Process raid pid: 60412
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60412'
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60412
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60412 ']'
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:30.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:30.382 14:08:07 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:30.382 [2024-11-27 14:08:07.578665] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization...
00:08:30.382 [2024-11-27 14:08:07.578830] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:30.642 [2024-11-27 14:08:07.749830] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:30.642 [2024-11-27 14:08:07.871763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:30.899 [2024-11-27 14:08:08.075994] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:30.899 [2024-11-27 14:08:08.076041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@868 -- # return 0
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.466 Base_1
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.466 Base_2
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 0 -eq 0 ']'
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # rpc_cmd bdev_raid_create -z 64 -r 0 -b ''\''Base_1 Base_2'\''' -n Raid
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.466 [2024-11-27 14:08:08.575953] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:08:31.466 [2024-11-27 14:08:08.578659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:08:31.466 [2024-11-27 14:08:08.578745] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:31.466 [2024-11-27 14:08:08.578765] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512
00:08:31.466 [2024-11-27 14:08:08.579138] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:31.466 [2024-11-27 14:08:08.579289] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:31.466 [2024-11-27 14:08:08.579302] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:08:31.466 [2024-11-27 14:08:08.579476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:08:31.466 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.467 [2024-11-27 14:08:08.583945] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:31.467 [2024-11-27 14:08:08.583978] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:08:31.467 true
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.467 [2024-11-27 14:08:08.596203] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=131072
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=64
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 0 -eq 0 ']'
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # expected_size=64
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 64 '!=' 64 ']'
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.467 [2024-11-27 14:08:08.640001] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:31.467 [2024-11-27 14:08:08.640034] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:08:31.467 [2024-11-27 14:08:08.640093] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144
00:08:31.467 true
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:08:31.467 [2024-11-27 14:08:08.652224] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=262144
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=128
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 0 -eq 0 ']'
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@378 -- # expected_size=128
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 128 '!=' 128 ']'
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60412
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60412 ']'
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # kill -0 60412
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # uname
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60412
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:31.467 killing process with pid 60412
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60412'
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@973 -- # kill 60412
00:08:31.467 [2024-11-27 14:08:08.734017] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:31.467 14:08:08 bdev_raid.raid0_resize_test -- common/autotest_common.sh@978 -- # wait 60412
00:08:31.467 [2024-11-27 14:08:08.734135] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:31.467 [2024-11-27 14:08:08.734231] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:31.467 [2024-11-27 14:08:08.734244] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:08:31.725 [2024-11-27 14:08:08.750757] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:32.707 ************************************
00:08:32.707 END TEST raid0_resize_test
00:08:32.707 ************************************
00:08:32.707 14:08:09 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:08:32.707
00:08:32.707 real 0m2.270s
00:08:32.707 user 0m2.490s
00:08:32.707 sys 0m0.388s
00:08:32.707 14:08:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:32.707 14:08:09 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.707 14:08:09 bdev_raid -- bdev/bdev_raid.sh@964 -- # run_test raid1_resize_test raid_resize_test 1
00:08:32.707 14:08:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:32.707 14:08:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:32.707 14:08:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x
00:08:32.707 ************************************
00:08:32.707 START TEST raid1_resize_test
00:08:32.707 ************************************
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@1129 -- # raid_resize_test 1
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@332 -- # local raid_level=1
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@333 -- # local blksize=512
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@334 -- # local bdev_size_mb=32
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@335 -- # local new_bdev_size_mb=64
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@336 -- # local blkcnt
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@337 -- # local raid_size_mb
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@338 -- # local new_raid_size_mb
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@339 -- # local expected_size
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@342 -- # raid_pid=60468
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@341 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid
00:08:32.707 Process raid pid: 60468
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@343 -- # echo 'Process raid pid: 60468'
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@344 -- # waitforlisten 60468
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@835 -- # '[' -z 60468 ']'
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@840 -- # local max_retries=100
00:08:32.707 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@844 -- # xtrace_disable
00:08:32.707 14:08:09 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:32.707 [2024-11-27 14:08:09.906868] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization...
00:08:32.707 [2024-11-27 14:08:09.907060] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:08:32.964 [2024-11-27 14:08:10.079740] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:08:32.964 [2024-11-27 14:08:10.214851] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:08:33.222 [2024-11-27 14:08:10.415125] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:33.222 [2024-11-27 14:08:10.415195] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@868 -- # return 0
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@346 -- # rpc_cmd bdev_null_create Base_1 32 512
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.788 Base_1
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@347 -- # rpc_cmd bdev_null_create Base_2 32 512
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.788 Base_2
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@349 -- # '[' 1 -eq 0 ']'
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@352 -- # rpc_cmd bdev_raid_create -r 1 -b ''\''Base_1 Base_2'\''' -n Raid
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.788 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.788 [2024-11-27 14:08:11.027649] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed
00:08:33.789 [2024-11-27 14:08:11.029951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed
00:08:33.789 [2024-11-27 14:08:11.030040] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780
00:08:33.789 [2024-11-27 14:08:11.030058] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512
00:08:33.789 [2024-11-27 14:08:11.030384] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ba0
00:08:33.789 [2024-11-27 14:08:11.030535] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780
00:08:33.789 [2024-11-27 14:08:11.030549] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x617000007780
00:08:33.789 [2024-11-27 14:08:11.030741] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:08:33.789 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.789 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@356 -- # rpc_cmd bdev_null_resize Base_1 64
00:08:33.789 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.789 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.789 [2024-11-27 14:08:11.035642] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:33.789 [2024-11-27 14:08:11.035682] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072
00:08:33.789 true
00:08:33.789 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:33.789 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:33.789 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:33.789 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # jq '.[].num_blocks'
00:08:33.789 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:33.789 [2024-11-27 14:08:11.047871] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@359 -- # blkcnt=65536
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@360 -- # raid_size_mb=32
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@361 -- # '[' 1 -eq 0 ']'
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@364 -- # expected_size=32
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@366 -- # '[' 32 '!=' 32 ']'
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@372 -- # rpc_cmd bdev_null_resize Base_2 64
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:34.047 [2024-11-27 14:08:11.095629] bdev_raid.c:2317:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev
00:08:34.047 [2024-11-27 14:08:11.095656] bdev_raid.c:2330:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072
00:08:34.047 [2024-11-27 14:08:11.095719] bdev_raid.c:2344:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 65536 to 131072
00:08:34.047 true
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # rpc_cmd bdev_get_bdevs -b Raid
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x
00:08:34.047 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # jq '.[].num_blocks'
00:08:34.048 [2024-11-27 14:08:11.107884] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@375 -- # blkcnt=131072
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@376 -- # raid_size_mb=64
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@377 -- # '[' 1 -eq 0 ']'
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@380 -- # expected_size=64
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@382 -- # '[' 64 '!=' 64 ']'
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@387 -- # killprocess 60468
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@954 -- # '[' -z 60468 ']'
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@958 -- # kill -0 60468
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # uname
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60468
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:08:34.048 killing process with pid 60468
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60468'
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@973 -- # kill 60468
00:08:34.048 [2024-11-27 14:08:11.183948] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start
00:08:34.048 14:08:11 bdev_raid.raid1_resize_test -- common/autotest_common.sh@978 -- # wait 60468
00:08:34.048 [2024-11-27 14:08:11.184026] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:08:34.048 [2024-11-27 14:08:11.184574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct
00:08:34.048 [2024-11-27 14:08:11.184606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Raid, state offline
00:08:34.048 [2024-11-27 14:08:11.199513] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit
00:08:34.984 14:08:12 bdev_raid.raid1_resize_test -- bdev/bdev_raid.sh@389 -- # return 0
00:08:34.984
00:08:34.984 real 0m2.392s
00:08:34.984 user 0m2.729s
00:08:34.984 sys 0m0.396s
00:08:34.984 14:08:12
bdev_raid.raid1_resize_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:34.984 ************************************ 00:08:34.984 END TEST raid1_resize_test 00:08:34.984 ************************************ 00:08:34.984 14:08:12 bdev_raid.raid1_resize_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.984 14:08:12 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:08:34.984 14:08:12 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:08:34.984 14:08:12 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:08:34.984 14:08:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:34.984 14:08:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:34.984 14:08:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:34.984 ************************************ 00:08:34.984 START TEST raid_state_function_test 00:08:34.984 ************************************ 00:08:34.984 14:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 false 00:08:34.984 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:34.984 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:34.984 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:08:34.984 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:34.984 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:34.984 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:34.984 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:34.984 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=60531 00:08:35.243 Process raid pid: 60531 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60531' 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 60531 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 60531 ']' 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:35.243 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:35.243 14:08:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:35.243 [2024-11-27 14:08:12.352430] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:08:35.243 [2024-11-27 14:08:12.352607] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:35.502 [2024-11-27 14:08:12.532448] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:35.502 [2024-11-27 14:08:12.679329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:35.761 [2024-11-27 14:08:12.885103] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:35.761 [2024-11-27 14:08:12.885148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.329 [2024-11-27 14:08:13.394291] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.329 [2024-11-27 14:08:13.394383] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.329 [2024-11-27 14:08:13.394400] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.329 [2024-11-27 14:08:13.394416] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.329 14:08:13 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.329 "name": "Existed_Raid", 00:08:36.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.329 "strip_size_kb": 64, 00:08:36.329 "state": "configuring", 00:08:36.329 
"raid_level": "raid0", 00:08:36.329 "superblock": false, 00:08:36.329 "num_base_bdevs": 2, 00:08:36.329 "num_base_bdevs_discovered": 0, 00:08:36.329 "num_base_bdevs_operational": 2, 00:08:36.329 "base_bdevs_list": [ 00:08:36.329 { 00:08:36.329 "name": "BaseBdev1", 00:08:36.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.329 "is_configured": false, 00:08:36.329 "data_offset": 0, 00:08:36.329 "data_size": 0 00:08:36.329 }, 00:08:36.329 { 00:08:36.329 "name": "BaseBdev2", 00:08:36.329 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.329 "is_configured": false, 00:08:36.329 "data_offset": 0, 00:08:36.329 "data_size": 0 00:08:36.329 } 00:08:36.329 ] 00:08:36.329 }' 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.329 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.921 [2024-11-27 14:08:13.914446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:36.921 [2024-11-27 14:08:13.914504] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:08:36.921 [2024-11-27 14:08:13.922419] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:36.921 [2024-11-27 14:08:13.922481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:36.921 [2024-11-27 14:08:13.922511] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:36.921 [2024-11-27 14:08:13.922530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.921 [2024-11-27 14:08:13.967538] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:36.921 BaseBdev1 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:08:36.921 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.922 [ 00:08:36.922 { 00:08:36.922 "name": "BaseBdev1", 00:08:36.922 "aliases": [ 00:08:36.922 "601affe9-0bce-4ebb-9078-4bf84f6cfa0d" 00:08:36.922 ], 00:08:36.922 "product_name": "Malloc disk", 00:08:36.922 "block_size": 512, 00:08:36.922 "num_blocks": 65536, 00:08:36.922 "uuid": "601affe9-0bce-4ebb-9078-4bf84f6cfa0d", 00:08:36.922 "assigned_rate_limits": { 00:08:36.922 "rw_ios_per_sec": 0, 00:08:36.922 "rw_mbytes_per_sec": 0, 00:08:36.922 "r_mbytes_per_sec": 0, 00:08:36.922 "w_mbytes_per_sec": 0 00:08:36.922 }, 00:08:36.922 "claimed": true, 00:08:36.922 "claim_type": "exclusive_write", 00:08:36.922 "zoned": false, 00:08:36.922 "supported_io_types": { 00:08:36.922 "read": true, 00:08:36.922 "write": true, 00:08:36.922 "unmap": true, 00:08:36.922 "flush": true, 00:08:36.922 "reset": true, 00:08:36.922 "nvme_admin": false, 00:08:36.922 "nvme_io": false, 00:08:36.922 "nvme_io_md": false, 00:08:36.922 "write_zeroes": true, 00:08:36.922 "zcopy": true, 00:08:36.922 "get_zone_info": false, 00:08:36.922 "zone_management": false, 00:08:36.922 "zone_append": false, 00:08:36.922 "compare": false, 00:08:36.922 "compare_and_write": false, 00:08:36.922 "abort": true, 00:08:36.922 "seek_hole": false, 00:08:36.922 "seek_data": false, 00:08:36.922 "copy": true, 00:08:36.922 "nvme_iov_md": 
false 00:08:36.922 }, 00:08:36.922 "memory_domains": [ 00:08:36.922 { 00:08:36.922 "dma_device_id": "system", 00:08:36.922 "dma_device_type": 1 00:08:36.922 }, 00:08:36.922 { 00:08:36.922 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:36.922 "dma_device_type": 2 00:08:36.922 } 00:08:36.922 ], 00:08:36.922 "driver_specific": {} 00:08:36.922 } 00:08:36.922 ] 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:36.922 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.922 14:08:14 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.922 14:08:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.922 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.922 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:36.922 "name": "Existed_Raid", 00:08:36.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.922 "strip_size_kb": 64, 00:08:36.922 "state": "configuring", 00:08:36.922 "raid_level": "raid0", 00:08:36.922 "superblock": false, 00:08:36.922 "num_base_bdevs": 2, 00:08:36.922 "num_base_bdevs_discovered": 1, 00:08:36.922 "num_base_bdevs_operational": 2, 00:08:36.922 "base_bdevs_list": [ 00:08:36.922 { 00:08:36.922 "name": "BaseBdev1", 00:08:36.922 "uuid": "601affe9-0bce-4ebb-9078-4bf84f6cfa0d", 00:08:36.922 "is_configured": true, 00:08:36.922 "data_offset": 0, 00:08:36.922 "data_size": 65536 00:08:36.922 }, 00:08:36.922 { 00:08:36.922 "name": "BaseBdev2", 00:08:36.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.922 "is_configured": false, 00:08:36.922 "data_offset": 0, 00:08:36.922 "data_size": 0 00:08:36.922 } 00:08:36.922 ] 00:08:36.922 }' 00:08:36.922 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:36.922 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.491 [2024-11-27 14:08:14.515755] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:37.491 [2024-11-27 14:08:14.515866] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.491 [2024-11-27 14:08:14.523771] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:37.491 [2024-11-27 14:08:14.526263] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:37.491 [2024-11-27 14:08:14.526344] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:37.491 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:37.491 "name": "Existed_Raid", 00:08:37.491 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.491 "strip_size_kb": 64, 00:08:37.491 "state": "configuring", 00:08:37.491 "raid_level": "raid0", 00:08:37.491 "superblock": false, 00:08:37.491 "num_base_bdevs": 2, 00:08:37.491 "num_base_bdevs_discovered": 1, 00:08:37.491 "num_base_bdevs_operational": 2, 00:08:37.491 "base_bdevs_list": [ 00:08:37.491 { 00:08:37.491 "name": "BaseBdev1", 00:08:37.491 "uuid": "601affe9-0bce-4ebb-9078-4bf84f6cfa0d", 00:08:37.492 "is_configured": true, 00:08:37.492 "data_offset": 0, 00:08:37.492 "data_size": 65536 00:08:37.492 }, 00:08:37.492 { 00:08:37.492 "name": "BaseBdev2", 00:08:37.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:37.492 "is_configured": false, 00:08:37.492 "data_offset": 0, 00:08:37.492 "data_size": 0 
00:08:37.492 } 00:08:37.492 ] 00:08:37.492 }' 00:08:37.492 14:08:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:37.492 14:08:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.061 [2024-11-27 14:08:15.095189] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:38.061 [2024-11-27 14:08:15.095289] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:38.061 [2024-11-27 14:08:15.095303] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:08:38.061 [2024-11-27 14:08:15.095646] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:38.061 [2024-11-27 14:08:15.095879] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:38.061 [2024-11-27 14:08:15.095900] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:38.061 [2024-11-27 14:08:15.096259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:38.061 BaseBdev2 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:38.061 14:08:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.061 [ 00:08:38.061 { 00:08:38.061 "name": "BaseBdev2", 00:08:38.061 "aliases": [ 00:08:38.061 "8950a706-bab4-4fce-8ab2-ae99bdb92fbb" 00:08:38.061 ], 00:08:38.061 "product_name": "Malloc disk", 00:08:38.061 "block_size": 512, 00:08:38.061 "num_blocks": 65536, 00:08:38.061 "uuid": "8950a706-bab4-4fce-8ab2-ae99bdb92fbb", 00:08:38.061 "assigned_rate_limits": { 00:08:38.061 "rw_ios_per_sec": 0, 00:08:38.061 "rw_mbytes_per_sec": 0, 00:08:38.061 "r_mbytes_per_sec": 0, 00:08:38.061 "w_mbytes_per_sec": 0 00:08:38.061 }, 00:08:38.061 "claimed": true, 00:08:38.061 "claim_type": "exclusive_write", 00:08:38.061 "zoned": false, 00:08:38.061 "supported_io_types": { 00:08:38.061 "read": true, 00:08:38.061 "write": true, 00:08:38.061 "unmap": true, 00:08:38.061 "flush": true, 00:08:38.061 "reset": true, 00:08:38.061 "nvme_admin": false, 00:08:38.061 "nvme_io": false, 00:08:38.061 "nvme_io_md": 
false, 00:08:38.061 "write_zeroes": true, 00:08:38.061 "zcopy": true, 00:08:38.061 "get_zone_info": false, 00:08:38.061 "zone_management": false, 00:08:38.061 "zone_append": false, 00:08:38.061 "compare": false, 00:08:38.061 "compare_and_write": false, 00:08:38.061 "abort": true, 00:08:38.061 "seek_hole": false, 00:08:38.061 "seek_data": false, 00:08:38.061 "copy": true, 00:08:38.061 "nvme_iov_md": false 00:08:38.061 }, 00:08:38.061 "memory_domains": [ 00:08:38.061 { 00:08:38.061 "dma_device_id": "system", 00:08:38.061 "dma_device_type": 1 00:08:38.061 }, 00:08:38.061 { 00:08:38.061 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.061 "dma_device_type": 2 00:08:38.061 } 00:08:38.061 ], 00:08:38.061 "driver_specific": {} 00:08:38.061 } 00:08:38.061 ] 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.061 "name": "Existed_Raid", 00:08:38.061 "uuid": "3b5bdd94-24cf-4eb1-9bd8-286d4a2b5518", 00:08:38.061 "strip_size_kb": 64, 00:08:38.061 "state": "online", 00:08:38.061 "raid_level": "raid0", 00:08:38.061 "superblock": false, 00:08:38.061 "num_base_bdevs": 2, 00:08:38.061 "num_base_bdevs_discovered": 2, 00:08:38.061 "num_base_bdevs_operational": 2, 00:08:38.061 "base_bdevs_list": [ 00:08:38.061 { 00:08:38.061 "name": "BaseBdev1", 00:08:38.061 "uuid": "601affe9-0bce-4ebb-9078-4bf84f6cfa0d", 00:08:38.061 "is_configured": true, 00:08:38.061 "data_offset": 0, 00:08:38.061 "data_size": 65536 00:08:38.061 }, 00:08:38.061 { 00:08:38.061 "name": "BaseBdev2", 00:08:38.061 "uuid": "8950a706-bab4-4fce-8ab2-ae99bdb92fbb", 00:08:38.061 "is_configured": true, 00:08:38.061 "data_offset": 0, 00:08:38.061 "data_size": 65536 00:08:38.061 } 00:08:38.061 ] 00:08:38.061 }' 00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:08:38.061 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.640 [2024-11-27 14:08:15.664059] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:38.640 "name": "Existed_Raid", 00:08:38.640 "aliases": [ 00:08:38.640 "3b5bdd94-24cf-4eb1-9bd8-286d4a2b5518" 00:08:38.640 ], 00:08:38.640 "product_name": "Raid Volume", 00:08:38.640 "block_size": 512, 00:08:38.640 "num_blocks": 131072, 00:08:38.640 "uuid": "3b5bdd94-24cf-4eb1-9bd8-286d4a2b5518", 00:08:38.640 "assigned_rate_limits": { 00:08:38.640 "rw_ios_per_sec": 0, 00:08:38.640 "rw_mbytes_per_sec": 0, 00:08:38.640 "r_mbytes_per_sec": 
0, 00:08:38.640 "w_mbytes_per_sec": 0 00:08:38.640 }, 00:08:38.640 "claimed": false, 00:08:38.640 "zoned": false, 00:08:38.640 "supported_io_types": { 00:08:38.640 "read": true, 00:08:38.640 "write": true, 00:08:38.640 "unmap": true, 00:08:38.640 "flush": true, 00:08:38.640 "reset": true, 00:08:38.640 "nvme_admin": false, 00:08:38.640 "nvme_io": false, 00:08:38.640 "nvme_io_md": false, 00:08:38.640 "write_zeroes": true, 00:08:38.640 "zcopy": false, 00:08:38.640 "get_zone_info": false, 00:08:38.640 "zone_management": false, 00:08:38.640 "zone_append": false, 00:08:38.640 "compare": false, 00:08:38.640 "compare_and_write": false, 00:08:38.640 "abort": false, 00:08:38.640 "seek_hole": false, 00:08:38.640 "seek_data": false, 00:08:38.640 "copy": false, 00:08:38.640 "nvme_iov_md": false 00:08:38.640 }, 00:08:38.640 "memory_domains": [ 00:08:38.640 { 00:08:38.640 "dma_device_id": "system", 00:08:38.640 "dma_device_type": 1 00:08:38.640 }, 00:08:38.640 { 00:08:38.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.640 "dma_device_type": 2 00:08:38.640 }, 00:08:38.640 { 00:08:38.640 "dma_device_id": "system", 00:08:38.640 "dma_device_type": 1 00:08:38.640 }, 00:08:38.640 { 00:08:38.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:38.640 "dma_device_type": 2 00:08:38.640 } 00:08:38.640 ], 00:08:38.640 "driver_specific": { 00:08:38.640 "raid": { 00:08:38.640 "uuid": "3b5bdd94-24cf-4eb1-9bd8-286d4a2b5518", 00:08:38.640 "strip_size_kb": 64, 00:08:38.640 "state": "online", 00:08:38.640 "raid_level": "raid0", 00:08:38.640 "superblock": false, 00:08:38.640 "num_base_bdevs": 2, 00:08:38.640 "num_base_bdevs_discovered": 2, 00:08:38.640 "num_base_bdevs_operational": 2, 00:08:38.640 "base_bdevs_list": [ 00:08:38.640 { 00:08:38.640 "name": "BaseBdev1", 00:08:38.640 "uuid": "601affe9-0bce-4ebb-9078-4bf84f6cfa0d", 00:08:38.640 "is_configured": true, 00:08:38.640 "data_offset": 0, 00:08:38.640 "data_size": 65536 00:08:38.640 }, 00:08:38.640 { 00:08:38.640 "name": "BaseBdev2", 
00:08:38.640 "uuid": "8950a706-bab4-4fce-8ab2-ae99bdb92fbb", 00:08:38.640 "is_configured": true, 00:08:38.640 "data_offset": 0, 00:08:38.640 "data_size": 65536 00:08:38.640 } 00:08:38.640 ] 00:08:38.640 } 00:08:38.640 } 00:08:38.640 }' 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:38.640 BaseBdev2' 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:38.640 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.900 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:38.900 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:38.900 14:08:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:08:38.900 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.900 14:08:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.900 [2024-11-27 14:08:15.959690] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:38.900 [2024-11-27 14:08:15.959916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:38.900 [2024-11-27 14:08:15.960009] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:38.900 "name": "Existed_Raid", 00:08:38.900 "uuid": "3b5bdd94-24cf-4eb1-9bd8-286d4a2b5518", 00:08:38.900 "strip_size_kb": 64, 00:08:38.900 
"state": "offline", 00:08:38.900 "raid_level": "raid0", 00:08:38.900 "superblock": false, 00:08:38.900 "num_base_bdevs": 2, 00:08:38.900 "num_base_bdevs_discovered": 1, 00:08:38.900 "num_base_bdevs_operational": 1, 00:08:38.900 "base_bdevs_list": [ 00:08:38.900 { 00:08:38.900 "name": null, 00:08:38.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:38.900 "is_configured": false, 00:08:38.900 "data_offset": 0, 00:08:38.900 "data_size": 65536 00:08:38.900 }, 00:08:38.900 { 00:08:38.900 "name": "BaseBdev2", 00:08:38.900 "uuid": "8950a706-bab4-4fce-8ab2-ae99bdb92fbb", 00:08:38.900 "is_configured": true, 00:08:38.900 "data_offset": 0, 00:08:38.900 "data_size": 65536 00:08:38.900 } 00:08:38.900 ] 00:08:38.900 }' 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:38.900 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.469 [2024-11-27 14:08:16.638038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:39.469 [2024-11-27 14:08:16.638100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:39.469 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 60531 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 60531 ']' 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 60531 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60531 00:08:39.729 killing process with pid 60531 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60531' 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 60531 00:08:39.729 [2024-11-27 14:08:16.803278] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:39.729 14:08:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 60531 00:08:39.729 [2024-11-27 14:08:16.818808] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:40.665 14:08:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:08:40.665 00:08:40.665 real 0m5.594s 00:08:40.665 user 0m8.590s 00:08:40.665 sys 0m0.704s 00:08:40.665 14:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.665 14:08:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:40.665 ************************************ 00:08:40.665 END TEST raid_state_function_test 00:08:40.665 ************************************ 00:08:40.665 14:08:17 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:08:40.665 14:08:17 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:08:40.666 14:08:17 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.666 14:08:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:40.666 ************************************ 00:08:40.666 START TEST raid_state_function_test_sb 00:08:40.666 ************************************ 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 2 true 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:08:40.666 Process raid pid: 60784 00:08:40.666 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=60784 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 60784' 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 60784 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 60784 ']' 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:40.666 14:08:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:40.925 [2024-11-27 14:08:18.056857] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:08:40.925 [2024-11-27 14:08:18.057347] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:41.183 [2024-11-27 14:08:18.241611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:41.183 [2024-11-27 14:08:18.405521] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:41.443 [2024-11-27 14:08:18.602585] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:41.443 [2024-11-27 14:08:18.602860] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.012 [2024-11-27 14:08:19.051565] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.012 [2024-11-27 14:08:19.051836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.012 [2024-11-27 14:08:19.051865] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.012 [2024-11-27 14:08:19.051884] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.012 "name": "Existed_Raid", 00:08:42.012 "uuid": "95d80197-84f8-4a4d-aa5b-eeea8e925dc8", 00:08:42.012 "strip_size_kb": 64, 00:08:42.012 "state": "configuring", 00:08:42.012 "raid_level": "raid0", 00:08:42.012 "superblock": true, 00:08:42.012 "num_base_bdevs": 2, 00:08:42.012 "num_base_bdevs_discovered": 0, 00:08:42.012 "num_base_bdevs_operational": 2, 00:08:42.012 "base_bdevs_list": [ 00:08:42.012 { 00:08:42.012 "name": "BaseBdev1", 00:08:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.012 "is_configured": false, 00:08:42.012 "data_offset": 0, 00:08:42.012 "data_size": 0 00:08:42.012 }, 00:08:42.012 { 00:08:42.012 "name": "BaseBdev2", 00:08:42.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.012 "is_configured": false, 00:08:42.012 "data_offset": 0, 00:08:42.012 "data_size": 0 00:08:42.012 } 00:08:42.012 ] 00:08:42.012 }' 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.012 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.579 [2024-11-27 14:08:19.615635] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:08:42.579 [2024-11-27 14:08:19.615856] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.579 [2024-11-27 14:08:19.623630] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:42.579 [2024-11-27 14:08:19.623858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:42.579 [2024-11-27 14:08:19.623884] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:42.579 [2024-11-27 14:08:19.623905] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.579 [2024-11-27 14:08:19.667327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:42.579 BaseBdev1 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.579 [ 00:08:42.579 { 00:08:42.579 "name": "BaseBdev1", 00:08:42.579 "aliases": [ 00:08:42.579 "39d39bd0-1115-45b4-8c6f-e61063727f58" 00:08:42.579 ], 00:08:42.579 "product_name": "Malloc disk", 00:08:42.579 "block_size": 512, 00:08:42.579 "num_blocks": 65536, 00:08:42.579 "uuid": "39d39bd0-1115-45b4-8c6f-e61063727f58", 00:08:42.579 "assigned_rate_limits": { 00:08:42.579 "rw_ios_per_sec": 0, 00:08:42.579 "rw_mbytes_per_sec": 0, 00:08:42.579 "r_mbytes_per_sec": 0, 00:08:42.579 "w_mbytes_per_sec": 0 00:08:42.579 }, 00:08:42.579 "claimed": true, 
00:08:42.579 "claim_type": "exclusive_write", 00:08:42.579 "zoned": false, 00:08:42.579 "supported_io_types": { 00:08:42.579 "read": true, 00:08:42.579 "write": true, 00:08:42.579 "unmap": true, 00:08:42.579 "flush": true, 00:08:42.579 "reset": true, 00:08:42.579 "nvme_admin": false, 00:08:42.579 "nvme_io": false, 00:08:42.579 "nvme_io_md": false, 00:08:42.579 "write_zeroes": true, 00:08:42.579 "zcopy": true, 00:08:42.579 "get_zone_info": false, 00:08:42.579 "zone_management": false, 00:08:42.579 "zone_append": false, 00:08:42.579 "compare": false, 00:08:42.579 "compare_and_write": false, 00:08:42.579 "abort": true, 00:08:42.579 "seek_hole": false, 00:08:42.579 "seek_data": false, 00:08:42.579 "copy": true, 00:08:42.579 "nvme_iov_md": false 00:08:42.579 }, 00:08:42.579 "memory_domains": [ 00:08:42.579 { 00:08:42.579 "dma_device_id": "system", 00:08:42.579 "dma_device_type": 1 00:08:42.579 }, 00:08:42.579 { 00:08:42.579 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:42.579 "dma_device_type": 2 00:08:42.579 } 00:08:42.579 ], 00:08:42.579 "driver_specific": {} 00:08:42.579 } 00:08:42.579 ] 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:42.579 14:08:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:42.579 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.580 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:42.580 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:42.580 "name": "Existed_Raid", 00:08:42.580 "uuid": "ad2f2956-af5b-4686-aa46-06cf18440935", 00:08:42.580 "strip_size_kb": 64, 00:08:42.580 "state": "configuring", 00:08:42.580 "raid_level": "raid0", 00:08:42.580 "superblock": true, 00:08:42.580 "num_base_bdevs": 2, 00:08:42.580 "num_base_bdevs_discovered": 1, 00:08:42.580 "num_base_bdevs_operational": 2, 00:08:42.580 "base_bdevs_list": [ 00:08:42.580 { 00:08:42.580 "name": "BaseBdev1", 00:08:42.580 "uuid": "39d39bd0-1115-45b4-8c6f-e61063727f58", 00:08:42.580 "is_configured": true, 00:08:42.580 "data_offset": 2048, 00:08:42.580 "data_size": 63488 00:08:42.580 }, 00:08:42.580 { 00:08:42.580 "name": "BaseBdev2", 00:08:42.580 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.580 
"is_configured": false, 00:08:42.580 "data_offset": 0, 00:08:42.580 "data_size": 0 00:08:42.580 } 00:08:42.580 ] 00:08:42.580 }' 00:08:42.580 14:08:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:42.580 14:08:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.146 [2024-11-27 14:08:20.203532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:43.146 [2024-11-27 14:08:20.203753] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.146 [2024-11-27 14:08:20.211597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:43.146 [2024-11-27 14:08:20.214158] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:43.146 [2024-11-27 14:08:20.214258] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.146 14:08:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.146 14:08:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.146 "name": "Existed_Raid", 00:08:43.146 "uuid": "b667627d-4676-4402-b379-94c6b9e50cad", 00:08:43.146 "strip_size_kb": 64, 00:08:43.146 "state": "configuring", 00:08:43.146 "raid_level": "raid0", 00:08:43.146 "superblock": true, 00:08:43.146 "num_base_bdevs": 2, 00:08:43.146 "num_base_bdevs_discovered": 1, 00:08:43.146 "num_base_bdevs_operational": 2, 00:08:43.146 "base_bdevs_list": [ 00:08:43.146 { 00:08:43.146 "name": "BaseBdev1", 00:08:43.146 "uuid": "39d39bd0-1115-45b4-8c6f-e61063727f58", 00:08:43.146 "is_configured": true, 00:08:43.146 "data_offset": 2048, 00:08:43.146 "data_size": 63488 00:08:43.146 }, 00:08:43.146 { 00:08:43.146 "name": "BaseBdev2", 00:08:43.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:43.146 "is_configured": false, 00:08:43.146 "data_offset": 0, 00:08:43.146 "data_size": 0 00:08:43.146 } 00:08:43.146 ] 00:08:43.146 }' 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.146 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.713 [2024-11-27 14:08:20.806277] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:43.713 [2024-11-27 14:08:20.806575] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:43.713 [2024-11-27 14:08:20.806594] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:43.713 BaseBdev2 00:08:43.713 [2024-11-27 14:08:20.806996] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:43.713 [2024-11-27 14:08:20.807202] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:43.713 [2024-11-27 14:08:20.807232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:08:43.713 [2024-11-27 14:08:20.807404] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.713 
14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.713 [ 00:08:43.713 { 00:08:43.713 "name": "BaseBdev2", 00:08:43.713 "aliases": [ 00:08:43.713 "9beb80ff-ed4c-4c7d-bc0f-4612024ecf77" 00:08:43.713 ], 00:08:43.713 "product_name": "Malloc disk", 00:08:43.713 "block_size": 512, 00:08:43.713 "num_blocks": 65536, 00:08:43.713 "uuid": "9beb80ff-ed4c-4c7d-bc0f-4612024ecf77", 00:08:43.713 "assigned_rate_limits": { 00:08:43.713 "rw_ios_per_sec": 0, 00:08:43.713 "rw_mbytes_per_sec": 0, 00:08:43.713 "r_mbytes_per_sec": 0, 00:08:43.713 "w_mbytes_per_sec": 0 00:08:43.713 }, 00:08:43.713 "claimed": true, 00:08:43.713 "claim_type": "exclusive_write", 00:08:43.713 "zoned": false, 00:08:43.713 "supported_io_types": { 00:08:43.713 "read": true, 00:08:43.713 "write": true, 00:08:43.713 "unmap": true, 00:08:43.713 "flush": true, 00:08:43.713 "reset": true, 00:08:43.713 "nvme_admin": false, 00:08:43.713 "nvme_io": false, 00:08:43.713 "nvme_io_md": false, 00:08:43.713 "write_zeroes": true, 00:08:43.713 "zcopy": true, 00:08:43.713 "get_zone_info": false, 00:08:43.713 "zone_management": false, 00:08:43.713 "zone_append": false, 00:08:43.713 "compare": false, 00:08:43.713 "compare_and_write": false, 00:08:43.713 "abort": true, 00:08:43.713 "seek_hole": false, 00:08:43.713 "seek_data": false, 00:08:43.713 "copy": true, 00:08:43.713 "nvme_iov_md": false 00:08:43.713 }, 00:08:43.713 "memory_domains": [ 00:08:43.713 { 00:08:43.713 "dma_device_id": "system", 00:08:43.713 "dma_device_type": 1 00:08:43.713 }, 00:08:43.713 { 00:08:43.713 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.713 "dma_device_type": 2 00:08:43.713 } 00:08:43.713 ], 00:08:43.713 "driver_specific": {} 00:08:43.713 } 00:08:43.713 ] 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:08:43.713 14:08:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:43.713 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.714 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:43.714 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.714 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:43.714 14:08:20 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:43.714 "name": "Existed_Raid", 00:08:43.714 "uuid": "b667627d-4676-4402-b379-94c6b9e50cad", 00:08:43.714 "strip_size_kb": 64, 00:08:43.714 "state": "online", 00:08:43.714 "raid_level": "raid0", 00:08:43.714 "superblock": true, 00:08:43.714 "num_base_bdevs": 2, 00:08:43.714 "num_base_bdevs_discovered": 2, 00:08:43.714 "num_base_bdevs_operational": 2, 00:08:43.714 "base_bdevs_list": [ 00:08:43.714 { 00:08:43.714 "name": "BaseBdev1", 00:08:43.714 "uuid": "39d39bd0-1115-45b4-8c6f-e61063727f58", 00:08:43.714 "is_configured": true, 00:08:43.714 "data_offset": 2048, 00:08:43.714 "data_size": 63488 00:08:43.714 }, 00:08:43.714 { 00:08:43.714 "name": "BaseBdev2", 00:08:43.714 "uuid": "9beb80ff-ed4c-4c7d-bc0f-4612024ecf77", 00:08:43.714 "is_configured": true, 00:08:43.714 "data_offset": 2048, 00:08:43.714 "data_size": 63488 00:08:43.714 } 00:08:43.714 ] 00:08:43.714 }' 00:08:43.714 14:08:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:43.714 14:08:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.282 [2024-11-27 14:08:21.398902] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:44.282 "name": "Existed_Raid", 00:08:44.282 "aliases": [ 00:08:44.282 "b667627d-4676-4402-b379-94c6b9e50cad" 00:08:44.282 ], 00:08:44.282 "product_name": "Raid Volume", 00:08:44.282 "block_size": 512, 00:08:44.282 "num_blocks": 126976, 00:08:44.282 "uuid": "b667627d-4676-4402-b379-94c6b9e50cad", 00:08:44.282 "assigned_rate_limits": { 00:08:44.282 "rw_ios_per_sec": 0, 00:08:44.282 "rw_mbytes_per_sec": 0, 00:08:44.282 "r_mbytes_per_sec": 0, 00:08:44.282 "w_mbytes_per_sec": 0 00:08:44.282 }, 00:08:44.282 "claimed": false, 00:08:44.282 "zoned": false, 00:08:44.282 "supported_io_types": { 00:08:44.282 "read": true, 00:08:44.282 "write": true, 00:08:44.282 "unmap": true, 00:08:44.282 "flush": true, 00:08:44.282 "reset": true, 00:08:44.282 "nvme_admin": false, 00:08:44.282 "nvme_io": false, 00:08:44.282 "nvme_io_md": false, 00:08:44.282 "write_zeroes": true, 00:08:44.282 "zcopy": false, 00:08:44.282 "get_zone_info": false, 00:08:44.282 "zone_management": false, 00:08:44.282 "zone_append": false, 00:08:44.282 "compare": false, 00:08:44.282 "compare_and_write": false, 00:08:44.282 "abort": false, 00:08:44.282 "seek_hole": false, 00:08:44.282 "seek_data": false, 00:08:44.282 "copy": false, 00:08:44.282 "nvme_iov_md": false 00:08:44.282 }, 00:08:44.282 "memory_domains": [ 00:08:44.282 { 00:08:44.282 
"dma_device_id": "system", 00:08:44.282 "dma_device_type": 1 00:08:44.282 }, 00:08:44.282 { 00:08:44.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.282 "dma_device_type": 2 00:08:44.282 }, 00:08:44.282 { 00:08:44.282 "dma_device_id": "system", 00:08:44.282 "dma_device_type": 1 00:08:44.282 }, 00:08:44.282 { 00:08:44.282 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.282 "dma_device_type": 2 00:08:44.282 } 00:08:44.282 ], 00:08:44.282 "driver_specific": { 00:08:44.282 "raid": { 00:08:44.282 "uuid": "b667627d-4676-4402-b379-94c6b9e50cad", 00:08:44.282 "strip_size_kb": 64, 00:08:44.282 "state": "online", 00:08:44.282 "raid_level": "raid0", 00:08:44.282 "superblock": true, 00:08:44.282 "num_base_bdevs": 2, 00:08:44.282 "num_base_bdevs_discovered": 2, 00:08:44.282 "num_base_bdevs_operational": 2, 00:08:44.282 "base_bdevs_list": [ 00:08:44.282 { 00:08:44.282 "name": "BaseBdev1", 00:08:44.282 "uuid": "39d39bd0-1115-45b4-8c6f-e61063727f58", 00:08:44.282 "is_configured": true, 00:08:44.282 "data_offset": 2048, 00:08:44.282 "data_size": 63488 00:08:44.282 }, 00:08:44.282 { 00:08:44.282 "name": "BaseBdev2", 00:08:44.282 "uuid": "9beb80ff-ed4c-4c7d-bc0f-4612024ecf77", 00:08:44.282 "is_configured": true, 00:08:44.282 "data_offset": 2048, 00:08:44.282 "data_size": 63488 00:08:44.282 } 00:08:44.282 ] 00:08:44.282 } 00:08:44.282 } 00:08:44.282 }' 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:08:44.282 BaseBdev2' 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:44.282 14:08:21 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.282 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.542 [2024-11-27 14:08:21.650655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:44.542 [2024-11-27 14:08:21.650698] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:44.542 [2024-11-27 14:08:21.650764] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:44.542 "name": "Existed_Raid", 00:08:44.542 "uuid": "b667627d-4676-4402-b379-94c6b9e50cad", 00:08:44.542 "strip_size_kb": 64, 00:08:44.542 "state": "offline", 00:08:44.542 "raid_level": "raid0", 00:08:44.542 "superblock": true, 00:08:44.542 "num_base_bdevs": 2, 00:08:44.542 "num_base_bdevs_discovered": 1, 00:08:44.542 "num_base_bdevs_operational": 1, 00:08:44.542 "base_bdevs_list": [ 00:08:44.542 { 00:08:44.542 "name": null, 00:08:44.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:44.542 "is_configured": false, 00:08:44.542 "data_offset": 0, 00:08:44.542 "data_size": 63488 00:08:44.542 }, 00:08:44.542 { 00:08:44.542 "name": "BaseBdev2", 00:08:44.542 "uuid": "9beb80ff-ed4c-4c7d-bc0f-4612024ecf77", 00:08:44.542 "is_configured": true, 00:08:44.542 "data_offset": 2048, 00:08:44.542 "data_size": 63488 00:08:44.542 } 00:08:44.542 ] 
00:08:44.542 }' 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:44.542 14:08:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.111 [2024-11-27 14:08:22.268713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.111 [2024-11-27 14:08:22.268790] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.111 14:08:22 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:08:45.111 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 60784 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 60784 ']' 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 60784 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60784 00:08:45.373 killing process with pid 60784 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60784' 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 60784 00:08:45.373 [2024-11-27 14:08:22.444318] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:45.373 14:08:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 60784 00:08:45.373 [2024-11-27 14:08:22.459412] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.311 14:08:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:08:46.311 00:08:46.311 real 0m5.571s 00:08:46.311 user 0m8.451s 00:08:46.311 sys 0m0.794s 00:08:46.311 14:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:46.311 ************************************ 00:08:46.311 END TEST raid_state_function_test_sb 00:08:46.311 ************************************ 00:08:46.311 14:08:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.311 14:08:23 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:08:46.311 14:08:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:46.311 14:08:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:46.311 14:08:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.311 ************************************ 00:08:46.311 START TEST raid_superblock_test 00:08:46.311 ************************************ 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 2 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=61047 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 61047 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 61047 ']' 00:08:46.311 
14:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.311 14:08:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.570 [2024-11-27 14:08:23.639150] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:08:46.570 [2024-11-27 14:08:23.639596] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61047 ] 00:08:46.570 [2024-11-27 14:08:23.823082] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.829 [2024-11-27 14:08:23.947722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.089 [2024-11-27 14:08:24.148409] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.089 [2024-11-27 14:08:24.148465] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 
00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.706 malloc1 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.706 [2024-11-27 14:08:24.734067] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:47.706 [2024-11-27 14:08:24.734271] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.706 [2024-11-27 14:08:24.734349] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:08:47.706 [2024-11-27 14:08:24.734643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev 
claimed 00:08:47.706 [2024-11-27 14:08:24.737523] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.706 [2024-11-27 14:08:24.737696] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:47.706 pt1 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.706 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.706 malloc2 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.707 [2024-11-27 14:08:24.788255] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:47.707 [2024-11-27 14:08:24.788331] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:47.707 [2024-11-27 14:08:24.788365] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:08:47.707 [2024-11-27 14:08:24.788378] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:47.707 [2024-11-27 14:08:24.791298] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:47.707 [2024-11-27 14:08:24.791518] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:47.707 pt2 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.707 [2024-11-27 14:08:24.800327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:47.707 [2024-11-27 14:08:24.802774] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:47.707 [2024-11-27 14:08:24.803003] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:08:47.707 [2024-11-27 14:08:24.803021] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:08:47.707 [2024-11-27 14:08:24.803328] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:08:47.707 [2024-11-27 14:08:24.803510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:08:47.707 [2024-11-27 14:08:24.803529] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:08:47.707 [2024-11-27 14:08:24.803698] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:47.707 14:08:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:47.707 "name": "raid_bdev1", 00:08:47.707 "uuid": "a143bac9-52cd-4126-8c79-c02daa30ef58", 00:08:47.707 "strip_size_kb": 64, 00:08:47.707 "state": "online", 00:08:47.707 "raid_level": "raid0", 00:08:47.707 "superblock": true, 00:08:47.707 "num_base_bdevs": 2, 00:08:47.707 "num_base_bdevs_discovered": 2, 00:08:47.707 "num_base_bdevs_operational": 2, 00:08:47.707 "base_bdevs_list": [ 00:08:47.707 { 00:08:47.707 "name": "pt1", 00:08:47.707 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:47.707 "is_configured": true, 00:08:47.707 "data_offset": 2048, 00:08:47.707 "data_size": 63488 00:08:47.707 }, 00:08:47.707 { 00:08:47.707 "name": "pt2", 00:08:47.707 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:47.707 "is_configured": true, 00:08:47.707 "data_offset": 2048, 00:08:47.707 "data_size": 63488 00:08:47.707 } 00:08:47.707 ] 00:08:47.707 }' 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:47.707 14:08:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:48.296 
14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:48.296 [2024-11-27 14:08:25.316897] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:48.296 "name": "raid_bdev1", 00:08:48.296 "aliases": [ 00:08:48.296 "a143bac9-52cd-4126-8c79-c02daa30ef58" 00:08:48.296 ], 00:08:48.296 "product_name": "Raid Volume", 00:08:48.296 "block_size": 512, 00:08:48.296 "num_blocks": 126976, 00:08:48.296 "uuid": "a143bac9-52cd-4126-8c79-c02daa30ef58", 00:08:48.296 "assigned_rate_limits": { 00:08:48.296 "rw_ios_per_sec": 0, 00:08:48.296 "rw_mbytes_per_sec": 0, 00:08:48.296 "r_mbytes_per_sec": 0, 00:08:48.296 "w_mbytes_per_sec": 0 00:08:48.296 }, 00:08:48.296 "claimed": false, 00:08:48.296 "zoned": false, 00:08:48.296 "supported_io_types": { 00:08:48.296 "read": true, 00:08:48.296 "write": true, 00:08:48.296 "unmap": true, 00:08:48.296 "flush": true, 00:08:48.296 "reset": true, 00:08:48.296 "nvme_admin": false, 00:08:48.296 "nvme_io": false, 00:08:48.296 "nvme_io_md": false, 00:08:48.296 "write_zeroes": true, 00:08:48.296 "zcopy": false, 00:08:48.296 "get_zone_info": false, 00:08:48.296 "zone_management": false, 00:08:48.296 "zone_append": false, 00:08:48.296 "compare": false, 00:08:48.296 "compare_and_write": false, 00:08:48.296 "abort": false, 00:08:48.296 "seek_hole": false, 00:08:48.296 
"seek_data": false, 00:08:48.296 "copy": false, 00:08:48.296 "nvme_iov_md": false 00:08:48.296 }, 00:08:48.296 "memory_domains": [ 00:08:48.296 { 00:08:48.296 "dma_device_id": "system", 00:08:48.296 "dma_device_type": 1 00:08:48.296 }, 00:08:48.296 { 00:08:48.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.296 "dma_device_type": 2 00:08:48.296 }, 00:08:48.296 { 00:08:48.296 "dma_device_id": "system", 00:08:48.296 "dma_device_type": 1 00:08:48.296 }, 00:08:48.296 { 00:08:48.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:48.296 "dma_device_type": 2 00:08:48.296 } 00:08:48.296 ], 00:08:48.296 "driver_specific": { 00:08:48.296 "raid": { 00:08:48.296 "uuid": "a143bac9-52cd-4126-8c79-c02daa30ef58", 00:08:48.296 "strip_size_kb": 64, 00:08:48.296 "state": "online", 00:08:48.296 "raid_level": "raid0", 00:08:48.296 "superblock": true, 00:08:48.296 "num_base_bdevs": 2, 00:08:48.296 "num_base_bdevs_discovered": 2, 00:08:48.296 "num_base_bdevs_operational": 2, 00:08:48.296 "base_bdevs_list": [ 00:08:48.296 { 00:08:48.296 "name": "pt1", 00:08:48.296 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:48.296 "is_configured": true, 00:08:48.296 "data_offset": 2048, 00:08:48.296 "data_size": 63488 00:08:48.296 }, 00:08:48.296 { 00:08:48.296 "name": "pt2", 00:08:48.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.296 "is_configured": true, 00:08:48.296 "data_offset": 2048, 00:08:48.296 "data_size": 63488 00:08:48.296 } 00:08:48.296 ] 00:08:48.296 } 00:08:48.296 } 00:08:48.296 }' 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:48.296 pt2' 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.296 14:08:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.296 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:48.297 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:08:48.556 [2024-11-27 14:08:25.588932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a143bac9-52cd-4126-8c79-c02daa30ef58 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z a143bac9-52cd-4126-8c79-c02daa30ef58 ']' 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.556 [2024-11-27 14:08:25.644539] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:48.556 [2024-11-27 14:08:25.644730] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:48.556 [2024-11-27 14:08:25.644970] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:48.556 [2024-11-27 14:08:25.645138] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:48.556 [2024-11-27 14:08:25.645272] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:08:48.556 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.557 [2024-11-27 14:08:25.780595] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:48.557 [2024-11-27 14:08:25.783207] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:48.557 [2024-11-27 14:08:25.783311] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: 
Superblock of a different raid bdev found on bdev malloc1 00:08:48.557 [2024-11-27 14:08:25.783396] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:48.557 [2024-11-27 14:08:25.783432] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:48.557 [2024-11-27 14:08:25.783450] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:08:48.557 request: 00:08:48.557 { 00:08:48.557 "name": "raid_bdev1", 00:08:48.557 "raid_level": "raid0", 00:08:48.557 "base_bdevs": [ 00:08:48.557 "malloc1", 00:08:48.557 "malloc2" 00:08:48.557 ], 00:08:48.557 "strip_size_kb": 64, 00:08:48.557 "superblock": false, 00:08:48.557 "method": "bdev_raid_create", 00:08:48.557 "req_id": 1 00:08:48.557 } 00:08:48.557 Got JSON-RPC error response 00:08:48.557 response: 00:08:48.557 { 00:08:48.557 "code": -17, 00:08:48.557 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:48.557 } 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.557 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:08:48.557 14:08:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.816 [2024-11-27 14:08:25.848647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:48.816 [2024-11-27 14:08:25.848753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.816 [2024-11-27 14:08:25.848779] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:08:48.816 [2024-11-27 14:08:25.848810] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.816 [2024-11-27 14:08:25.851640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.816 [2024-11-27 14:08:25.851689] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:48.816 [2024-11-27 14:08:25.851831] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:48.816 [2024-11-27 14:08:25.851906] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:48.816 pt1 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 
00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:48.816 "name": "raid_bdev1", 00:08:48.816 "uuid": "a143bac9-52cd-4126-8c79-c02daa30ef58", 00:08:48.816 "strip_size_kb": 64, 00:08:48.816 "state": "configuring", 00:08:48.816 "raid_level": "raid0", 00:08:48.816 "superblock": true, 00:08:48.816 "num_base_bdevs": 2, 00:08:48.816 "num_base_bdevs_discovered": 1, 00:08:48.816 "num_base_bdevs_operational": 2, 00:08:48.816 "base_bdevs_list": [ 00:08:48.816 { 00:08:48.816 "name": "pt1", 00:08:48.816 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:08:48.816 "is_configured": true, 00:08:48.816 "data_offset": 2048, 00:08:48.816 "data_size": 63488 00:08:48.816 }, 00:08:48.816 { 00:08:48.816 "name": null, 00:08:48.816 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:48.816 "is_configured": false, 00:08:48.816 "data_offset": 2048, 00:08:48.816 "data_size": 63488 00:08:48.816 } 00:08:48.816 ] 00:08:48.816 }' 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:48.816 14:08:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.384 [2024-11-27 14:08:26.372801] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:49.384 [2024-11-27 14:08:26.372912] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:49.384 [2024-11-27 14:08:26.372943] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:08:49.384 [2024-11-27 14:08:26.372960] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:49.384 [2024-11-27 14:08:26.373524] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:49.384 [2024-11-27 14:08:26.373570] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
pt2 00:08:49.384 [2024-11-27 14:08:26.373694] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:49.384 [2024-11-27 14:08:26.373735] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:49.384 [2024-11-27 14:08:26.373906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:49.384 [2024-11-27 14:08:26.373928] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:49.384 [2024-11-27 14:08:26.374231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:08:49.384 [2024-11-27 14:08:26.374409] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:49.384 [2024-11-27 14:08:26.374424] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:49.384 [2024-11-27 14:08:26.374586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:49.384 pt2 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- 
# local num_base_bdevs_operational=2 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:49.384 "name": "raid_bdev1", 00:08:49.384 "uuid": "a143bac9-52cd-4126-8c79-c02daa30ef58", 00:08:49.384 "strip_size_kb": 64, 00:08:49.384 "state": "online", 00:08:49.384 "raid_level": "raid0", 00:08:49.384 "superblock": true, 00:08:49.384 "num_base_bdevs": 2, 00:08:49.384 "num_base_bdevs_discovered": 2, 00:08:49.384 "num_base_bdevs_operational": 2, 00:08:49.384 "base_bdevs_list": [ 00:08:49.384 { 00:08:49.384 "name": "pt1", 00:08:49.384 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.384 "is_configured": true, 00:08:49.384 "data_offset": 2048, 00:08:49.384 "data_size": 63488 00:08:49.384 }, 00:08:49.384 { 00:08:49.384 "name": "pt2", 00:08:49.384 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.384 "is_configured": true, 00:08:49.384 "data_offset": 2048, 00:08:49.384 "data_size": 63488 00:08:49.384 } 00:08:49.384 ] 00:08:49.384 }' 00:08:49.384 14:08:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:49.384 14:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.643 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:08:49.643 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:08:49.643 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:08:49.643 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:08:49.643 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:08:49.643 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:08:49.643 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:49.643 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:08:49.643 14:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.643 14:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.643 [2024-11-27 14:08:26.905331] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.903 14:08:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.903 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:08:49.903 "name": "raid_bdev1", 00:08:49.903 "aliases": [ 00:08:49.903 "a143bac9-52cd-4126-8c79-c02daa30ef58" 00:08:49.903 ], 00:08:49.903 "product_name": "Raid Volume", 00:08:49.903 "block_size": 512, 00:08:49.903 "num_blocks": 126976, 00:08:49.903 "uuid": "a143bac9-52cd-4126-8c79-c02daa30ef58", 00:08:49.903 "assigned_rate_limits": { 00:08:49.903 "rw_ios_per_sec": 0, 00:08:49.903 "rw_mbytes_per_sec": 0, 00:08:49.903 
"r_mbytes_per_sec": 0, 00:08:49.903 "w_mbytes_per_sec": 0 00:08:49.903 }, 00:08:49.903 "claimed": false, 00:08:49.903 "zoned": false, 00:08:49.903 "supported_io_types": { 00:08:49.903 "read": true, 00:08:49.903 "write": true, 00:08:49.903 "unmap": true, 00:08:49.903 "flush": true, 00:08:49.903 "reset": true, 00:08:49.903 "nvme_admin": false, 00:08:49.903 "nvme_io": false, 00:08:49.903 "nvme_io_md": false, 00:08:49.903 "write_zeroes": true, 00:08:49.903 "zcopy": false, 00:08:49.903 "get_zone_info": false, 00:08:49.903 "zone_management": false, 00:08:49.903 "zone_append": false, 00:08:49.903 "compare": false, 00:08:49.903 "compare_and_write": false, 00:08:49.903 "abort": false, 00:08:49.903 "seek_hole": false, 00:08:49.903 "seek_data": false, 00:08:49.903 "copy": false, 00:08:49.903 "nvme_iov_md": false 00:08:49.903 }, 00:08:49.903 "memory_domains": [ 00:08:49.903 { 00:08:49.903 "dma_device_id": "system", 00:08:49.903 "dma_device_type": 1 00:08:49.903 }, 00:08:49.903 { 00:08:49.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.903 "dma_device_type": 2 00:08:49.903 }, 00:08:49.903 { 00:08:49.903 "dma_device_id": "system", 00:08:49.903 "dma_device_type": 1 00:08:49.903 }, 00:08:49.903 { 00:08:49.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.903 "dma_device_type": 2 00:08:49.903 } 00:08:49.903 ], 00:08:49.903 "driver_specific": { 00:08:49.903 "raid": { 00:08:49.903 "uuid": "a143bac9-52cd-4126-8c79-c02daa30ef58", 00:08:49.903 "strip_size_kb": 64, 00:08:49.903 "state": "online", 00:08:49.903 "raid_level": "raid0", 00:08:49.903 "superblock": true, 00:08:49.903 "num_base_bdevs": 2, 00:08:49.903 "num_base_bdevs_discovered": 2, 00:08:49.903 "num_base_bdevs_operational": 2, 00:08:49.903 "base_bdevs_list": [ 00:08:49.903 { 00:08:49.903 "name": "pt1", 00:08:49.903 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.903 "is_configured": true, 00:08:49.903 "data_offset": 2048, 00:08:49.903 "data_size": 63488 00:08:49.903 }, 00:08:49.903 { 00:08:49.903 "name": 
"pt2", 00:08:49.903 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.903 "is_configured": true, 00:08:49.903 "data_offset": 2048, 00:08:49.903 "data_size": 63488 00:08:49.903 } 00:08:49.903 ] 00:08:49.903 } 00:08:49.903 } 00:08:49.903 }' 00:08:49.903 14:08:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:08:49.903 pt2' 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.903 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.903 [2024-11-27 14:08:27.173423] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' a143bac9-52cd-4126-8c79-c02daa30ef58 '!=' a143bac9-52cd-4126-8c79-c02daa30ef58 ']' 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 61047 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 61047 ']' 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@958 -- # kill -0 61047 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61047 00:08:50.163 killing process with pid 61047 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61047' 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 61047 00:08:50.163 [2024-11-27 14:08:27.254458] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:50.163 14:08:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 61047 00:08:50.163 [2024-11-27 14:08:27.254558] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.163 [2024-11-27 14:08:27.254645] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.163 [2024-11-27 14:08:27.254666] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:50.431 [2024-11-27 14:08:27.443336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:51.367 14:08:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:08:51.367 00:08:51.367 real 0m4.960s 00:08:51.367 user 0m7.343s 00:08:51.367 sys 0m0.701s 00:08:51.367 14:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:51.367 14:08:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # 
set +x 00:08:51.367 ************************************ 00:08:51.367 END TEST raid_superblock_test 00:08:51.367 ************************************ 00:08:51.367 14:08:28 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:08:51.367 14:08:28 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:51.367 14:08:28 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:51.367 14:08:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:51.367 ************************************ 00:08:51.367 START TEST raid_read_error_test 00:08:51.367 ************************************ 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 read 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:51.367 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.RMrRhJnoTr 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61258 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61258 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 61258 ']' 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:51.368 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:51.368 14:08:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:51.627 [2024-11-27 14:08:28.658868] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:08:51.627 [2024-11-27 14:08:28.659564] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61258 ] 00:08:51.627 [2024-11-27 14:08:28.838741] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:51.886 [2024-11-27 14:08:28.967015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:52.146 [2024-11-27 14:08:29.167760] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.146 [2024-11-27 14:08:29.167885] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.405 BaseBdev1_malloc 
00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.405 true 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.405 [2024-11-27 14:08:29.668920] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:52.405 [2024-11-27 14:08:29.668988] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.405 [2024-11-27 14:08:29.669019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:52.405 [2024-11-27 14:08:29.669037] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.405 [2024-11-27 14:08:29.672017] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.405 [2024-11-27 14:08:29.672068] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:52.405 BaseBdev1 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2_malloc 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.405 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.667 BaseBdev2_malloc 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.668 true 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.668 [2024-11-27 14:08:29.733799] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:52.668 [2024-11-27 14:08:29.733914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.668 [2024-11-27 14:08:29.733942] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:52.668 [2024-11-27 14:08:29.733959] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.668 [2024-11-27 14:08:29.736824] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.668 [2024-11-27 14:08:29.736865] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:52.668 BaseBdev2 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.668 [2024-11-27 14:08:29.741836] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:52.668 [2024-11-27 14:08:29.744253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:52.668 [2024-11-27 14:08:29.744523] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:52.668 [2024-11-27 14:08:29.744549] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:52.668 [2024-11-27 14:08:29.744918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:52.668 [2024-11-27 14:08:29.745167] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:52.668 [2024-11-27 14:08:29.745203] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:52.668 [2024-11-27 14:08:29.745410] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid0 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:52.668 "name": "raid_bdev1", 00:08:52.668 "uuid": "702ca812-f48f-43e6-9051-672be49188a0", 00:08:52.668 "strip_size_kb": 64, 00:08:52.668 "state": "online", 00:08:52.668 "raid_level": "raid0", 00:08:52.668 "superblock": true, 00:08:52.668 "num_base_bdevs": 2, 00:08:52.668 "num_base_bdevs_discovered": 2, 00:08:52.668 "num_base_bdevs_operational": 2, 00:08:52.668 "base_bdevs_list": [ 00:08:52.668 { 00:08:52.668 "name": "BaseBdev1", 00:08:52.668 "uuid": "8e1c7b0a-fdaf-5077-8a2c-4f49922828ba", 00:08:52.668 "is_configured": true, 00:08:52.668 "data_offset": 2048, 00:08:52.668 "data_size": 63488 00:08:52.668 }, 00:08:52.668 { 00:08:52.668 "name": "BaseBdev2", 00:08:52.668 "uuid": 
"adad6bd0-39c4-59d1-a640-f9762995fe11", 00:08:52.668 "is_configured": true, 00:08:52.668 "data_offset": 2048, 00:08:52.668 "data_size": 63488 00:08:52.668 } 00:08:52.668 ] 00:08:52.668 }' 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:52.668 14:08:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.235 14:08:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:53.235 14:08:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:53.235 [2024-11-27 14:08:30.415527] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:54.181 "name": "raid_bdev1", 00:08:54.181 "uuid": "702ca812-f48f-43e6-9051-672be49188a0", 00:08:54.181 "strip_size_kb": 64, 00:08:54.181 "state": "online", 00:08:54.181 "raid_level": "raid0", 00:08:54.181 "superblock": true, 00:08:54.181 "num_base_bdevs": 2, 00:08:54.181 "num_base_bdevs_discovered": 2, 00:08:54.181 "num_base_bdevs_operational": 2, 00:08:54.181 "base_bdevs_list": [ 00:08:54.181 { 00:08:54.181 "name": "BaseBdev1", 00:08:54.181 "uuid": "8e1c7b0a-fdaf-5077-8a2c-4f49922828ba", 00:08:54.181 "is_configured": true, 00:08:54.181 "data_offset": 2048, 00:08:54.181 "data_size": 63488 00:08:54.181 }, 00:08:54.181 { 00:08:54.181 "name": "BaseBdev2", 00:08:54.181 "uuid": 
"adad6bd0-39c4-59d1-a640-f9762995fe11", 00:08:54.181 "is_configured": true, 00:08:54.181 "data_offset": 2048, 00:08:54.181 "data_size": 63488 00:08:54.181 } 00:08:54.181 ] 00:08:54.181 }' 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:54.181 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.751 [2024-11-27 14:08:31.805676] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:54.751 [2024-11-27 14:08:31.805716] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:54.751 [2024-11-27 14:08:31.809504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:54.751 [2024-11-27 14:08:31.809717] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:54.751 [2024-11-27 14:08:31.809852] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:54.751 [2024-11-27 14:08:31.810143] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:54.751 { 00:08:54.751 "results": [ 00:08:54.751 { 00:08:54.751 "job": "raid_bdev1", 00:08:54.751 "core_mask": "0x1", 00:08:54.751 "workload": "randrw", 00:08:54.751 "percentage": 50, 00:08:54.751 "status": "finished", 00:08:54.751 "queue_depth": 1, 00:08:54.751 "io_size": 131072, 00:08:54.751 "runtime": 1.38764, 00:08:54.751 "iops": 10093.395981666714, 00:08:54.751 "mibps": 1261.6744977083392, 00:08:54.751 "io_failed": 1, 00:08:54.751 "io_timeout": 0, 00:08:54.751 "avg_latency_us": 
137.30488729661144, 00:08:54.751 "min_latency_us": 38.63272727272727, 00:08:54.751 "max_latency_us": 1995.8690909090908 00:08:54.751 } 00:08:54.751 ], 00:08:54.751 "core_count": 1 00:08:54.751 } 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61258 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 61258 ']' 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 61258 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61258 00:08:54.751 killing process with pid 61258 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61258' 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 61258 00:08:54.751 [2024-11-27 14:08:31.849445] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:54.751 14:08:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 61258 00:08:54.751 [2024-11-27 14:08:31.970308] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:56.130 14:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.RMrRhJnoTr 00:08:56.130 14:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:08:56.130 
14:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:08:56.130 14:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.72 00:08:56.130 14:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:08:56.130 ************************************ 00:08:56.130 END TEST raid_read_error_test 00:08:56.130 ************************************ 00:08:56.130 14:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:08:56.130 14:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:08:56.130 14:08:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.72 != \0\.\0\0 ]] 00:08:56.130 00:08:56.130 real 0m4.533s 00:08:56.130 user 0m5.652s 00:08:56.130 sys 0m0.578s 00:08:56.130 14:08:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:56.130 14:08:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.130 14:08:33 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:08:56.130 14:08:33 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:08:56.130 14:08:33 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:56.130 14:08:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:56.130 ************************************ 00:08:56.130 START TEST raid_write_error_test 00:08:56.130 ************************************ 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 2 write 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 
00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:08:56.130 14:08:33 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.XqEctvcaOr 00:08:56.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=61404 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 61404 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 61404 ']' 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:56.130 14:08:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.130 [2024-11-27 14:08:33.251079] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:08:56.130 [2024-11-27 14:08:33.251256] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61404 ] 00:08:56.389 [2024-11-27 14:08:33.436932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:56.389 [2024-11-27 14:08:33.569184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:56.647 [2024-11-27 14:08:33.779036] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:56.647 [2024-11-27 14:08:33.779483] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:57.215 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:57.215 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.216 BaseBdev1_malloc 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.216 true 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.216 [2024-11-27 14:08:34.338282] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:57.216 [2024-11-27 14:08:34.338374] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.216 [2024-11-27 14:08:34.338404] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:08:57.216 [2024-11-27 14:08:34.338423] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.216 [2024-11-27 14:08:34.341290] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.216 [2024-11-27 14:08:34.341356] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:57.216 BaseBdev1 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.216 BaseBdev2_malloc 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:08:57.216 14:08:34 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.216 true 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.216 [2024-11-27 14:08:34.395285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:57.216 [2024-11-27 14:08:34.395553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.216 [2024-11-27 14:08:34.395591] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:08:57.216 [2024-11-27 14:08:34.395611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.216 [2024-11-27 14:08:34.398473] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.216 [2024-11-27 14:08:34.398536] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:57.216 BaseBdev2 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.216 [2024-11-27 14:08:34.403512] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:08:57.216 [2024-11-27 14:08:34.405986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:57.216 [2024-11-27 14:08:34.406246] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:08:57.216 [2024-11-27 14:08:34.406271] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:57.216 [2024-11-27 14:08:34.406538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:08:57.216 [2024-11-27 14:08:34.406779] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:08:57.216 [2024-11-27 14:08:34.406824] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:08:57.216 [2024-11-27 14:08:34.407020] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:57.216 "name": "raid_bdev1", 00:08:57.216 "uuid": "6d283b04-1c8b-4fbe-84c4-de8af27142d7", 00:08:57.216 "strip_size_kb": 64, 00:08:57.216 "state": "online", 00:08:57.216 "raid_level": "raid0", 00:08:57.216 "superblock": true, 00:08:57.216 "num_base_bdevs": 2, 00:08:57.216 "num_base_bdevs_discovered": 2, 00:08:57.216 "num_base_bdevs_operational": 2, 00:08:57.216 "base_bdevs_list": [ 00:08:57.216 { 00:08:57.216 "name": "BaseBdev1", 00:08:57.216 "uuid": "81d6f6c7-d7ac-55e0-96b7-e0174f310d62", 00:08:57.216 "is_configured": true, 00:08:57.216 "data_offset": 2048, 00:08:57.216 "data_size": 63488 00:08:57.216 }, 00:08:57.216 { 00:08:57.216 "name": "BaseBdev2", 00:08:57.216 "uuid": "d09cb0e1-42c4-58ac-acaf-62c99d1f69cd", 00:08:57.216 "is_configured": true, 00:08:57.216 "data_offset": 2048, 00:08:57.216 "data_size": 63488 00:08:57.216 } 00:08:57.216 ] 00:08:57.216 }' 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:57.216 14:08:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.785 14:08:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:08:57.785 14:08:34 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:08:58.044 [2024-11-27 14:08:35.105158] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:08:58.981 14:08:35 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:58.981 14:08:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:58.981 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:58.981 14:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:08:58.981 "name": "raid_bdev1", 00:08:58.981 "uuid": "6d283b04-1c8b-4fbe-84c4-de8af27142d7", 00:08:58.981 "strip_size_kb": 64, 00:08:58.981 "state": "online", 00:08:58.981 "raid_level": "raid0", 00:08:58.981 "superblock": true, 00:08:58.981 "num_base_bdevs": 2, 00:08:58.981 "num_base_bdevs_discovered": 2, 00:08:58.981 "num_base_bdevs_operational": 2, 00:08:58.981 "base_bdevs_list": [ 00:08:58.981 { 00:08:58.981 "name": "BaseBdev1", 00:08:58.981 "uuid": "81d6f6c7-d7ac-55e0-96b7-e0174f310d62", 00:08:58.981 "is_configured": true, 00:08:58.981 "data_offset": 2048, 00:08:58.981 "data_size": 63488 00:08:58.981 }, 00:08:58.981 { 00:08:58.981 "name": "BaseBdev2", 00:08:58.981 "uuid": "d09cb0e1-42c4-58ac-acaf-62c99d1f69cd", 00:08:58.981 "is_configured": true, 00:08:58.981 "data_offset": 2048, 00:08:58.981 "data_size": 63488 00:08:58.981 } 00:08:58.981 ] 00:08:58.981 }' 00:08:58.981 14:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:08:58.981 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- 
# rpc_cmd bdev_raid_delete raid_bdev1 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.239 [2024-11-27 14:08:36.462100] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:59.239 [2024-11-27 14:08:36.462143] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:59.239 [2024-11-27 14:08:36.465580] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.239 [2024-11-27 14:08:36.465636] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:59.239 [2024-11-27 14:08:36.465679] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.239 [2024-11-27 14:08:36.465714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:08:59.239 { 00:08:59.239 "results": [ 00:08:59.239 { 00:08:59.239 "job": "raid_bdev1", 00:08:59.239 "core_mask": "0x1", 00:08:59.239 "workload": "randrw", 00:08:59.239 "percentage": 50, 00:08:59.239 "status": "finished", 00:08:59.239 "queue_depth": 1, 00:08:59.239 "io_size": 131072, 00:08:59.239 "runtime": 1.354697, 00:08:59.239 "iops": 10039.145284886583, 00:08:59.239 "mibps": 1254.8931606108229, 00:08:59.239 "io_failed": 1, 00:08:59.239 "io_timeout": 0, 00:08:59.239 "avg_latency_us": 139.03171932545067, 00:08:59.239 "min_latency_us": 37.93454545454546, 00:08:59.239 "max_latency_us": 1802.24 00:08:59.239 } 00:08:59.239 ], 00:08:59.239 "core_count": 1 00:08:59.239 } 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 61404 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 
-- # '[' -z 61404 ']' 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 61404 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61404 00:08:59.239 killing process with pid 61404 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61404' 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 61404 00:08:59.239 [2024-11-27 14:08:36.504700] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:59.239 14:08:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 61404 00:08:59.496 [2024-11-27 14:08:36.626659] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:00.872 14:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.XqEctvcaOr 00:09:00.872 14:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:00.872 14:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:00.872 14:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.74 00:09:00.872 14:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:09:00.872 14:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:00.872 14:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 
00:09:00.872 14:08:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.74 != \0\.\0\0 ]] 00:09:00.872 00:09:00.872 real 0m4.660s 00:09:00.872 user 0m5.851s 00:09:00.872 sys 0m0.591s 00:09:00.872 14:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.872 ************************************ 00:09:00.872 END TEST raid_write_error_test 00:09:00.872 ************************************ 00:09:00.872 14:08:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.872 14:08:37 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:00.872 14:08:37 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:09:00.872 14:08:37 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:00.872 14:08:37 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.872 14:08:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.872 ************************************ 00:09:00.872 START TEST raid_state_function_test 00:09:00.872 ************************************ 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 false 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.872 14:08:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=61548 00:09:00.872 Process raid pid: 61548 00:09:00.872 14:08:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61548' 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 61548 00:09:00.872 14:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 61548 ']' 00:09:00.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.873 14:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.873 14:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:00.873 14:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.873 14:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:00.873 14:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.873 [2024-11-27 14:08:37.937764] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:09:00.873 [2024-11-27 14:08:37.937934] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.873 [2024-11-27 14:08:38.116438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:01.132 [2024-11-27 14:08:38.253811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:01.390 [2024-11-27 14:08:38.472563] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.390 [2024-11-27 14:08:38.472608] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.958 [2024-11-27 14:08:38.950588] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:01.958 [2024-11-27 14:08:38.950684] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:01.958 [2024-11-27 14:08:38.950703] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:01.958 [2024-11-27 14:08:38.950721] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.958 14:08:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:01.958 14:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:01.958 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:01.958 "name": "Existed_Raid", 00:09:01.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.958 "strip_size_kb": 64, 00:09:01.958 "state": "configuring", 00:09:01.958 
"raid_level": "concat", 00:09:01.958 "superblock": false, 00:09:01.958 "num_base_bdevs": 2, 00:09:01.958 "num_base_bdevs_discovered": 0, 00:09:01.958 "num_base_bdevs_operational": 2, 00:09:01.958 "base_bdevs_list": [ 00:09:01.958 { 00:09:01.958 "name": "BaseBdev1", 00:09:01.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.958 "is_configured": false, 00:09:01.958 "data_offset": 0, 00:09:01.958 "data_size": 0 00:09:01.958 }, 00:09:01.958 { 00:09:01.958 "name": "BaseBdev2", 00:09:01.958 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:01.958 "is_configured": false, 00:09:01.958 "data_offset": 0, 00:09:01.958 "data_size": 0 00:09:01.958 } 00:09:01.958 ] 00:09:01.958 }' 00:09:01.958 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:01.958 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.216 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:02.216 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.216 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.474 [2024-11-27 14:08:39.498701] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:02.474 [2024-11-27 14:08:39.498749] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:09:02.474 [2024-11-27 14:08:39.510710] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:02.474 [2024-11-27 14:08:39.510792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:02.474 [2024-11-27 14:08:39.510811] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:02.474 [2024-11-27 14:08:39.510830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.474 [2024-11-27 14:08:39.555602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.474 BaseBdev1 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:02.474 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.475 [ 00:09:02.475 { 00:09:02.475 "name": "BaseBdev1", 00:09:02.475 "aliases": [ 00:09:02.475 "914f4778-d59c-4ef2-b470-d9c71be11534" 00:09:02.475 ], 00:09:02.475 "product_name": "Malloc disk", 00:09:02.475 "block_size": 512, 00:09:02.475 "num_blocks": 65536, 00:09:02.475 "uuid": "914f4778-d59c-4ef2-b470-d9c71be11534", 00:09:02.475 "assigned_rate_limits": { 00:09:02.475 "rw_ios_per_sec": 0, 00:09:02.475 "rw_mbytes_per_sec": 0, 00:09:02.475 "r_mbytes_per_sec": 0, 00:09:02.475 "w_mbytes_per_sec": 0 00:09:02.475 }, 00:09:02.475 "claimed": true, 00:09:02.475 "claim_type": "exclusive_write", 00:09:02.475 "zoned": false, 00:09:02.475 "supported_io_types": { 00:09:02.475 "read": true, 00:09:02.475 "write": true, 00:09:02.475 "unmap": true, 00:09:02.475 "flush": true, 00:09:02.475 "reset": true, 00:09:02.475 "nvme_admin": false, 00:09:02.475 "nvme_io": false, 00:09:02.475 "nvme_io_md": false, 00:09:02.475 "write_zeroes": true, 00:09:02.475 "zcopy": true, 00:09:02.475 "get_zone_info": false, 00:09:02.475 "zone_management": false, 00:09:02.475 "zone_append": false, 00:09:02.475 "compare": false, 00:09:02.475 "compare_and_write": false, 00:09:02.475 "abort": true, 00:09:02.475 "seek_hole": false, 00:09:02.475 "seek_data": false, 00:09:02.475 "copy": true, 00:09:02.475 "nvme_iov_md": 
false 00:09:02.475 }, 00:09:02.475 "memory_domains": [ 00:09:02.475 { 00:09:02.475 "dma_device_id": "system", 00:09:02.475 "dma_device_type": 1 00:09:02.475 }, 00:09:02.475 { 00:09:02.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:02.475 "dma_device_type": 2 00:09:02.475 } 00:09:02.475 ], 00:09:02.475 "driver_specific": {} 00:09:02.475 } 00:09:02.475 ] 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:02.475 
14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:02.475 "name": "Existed_Raid", 00:09:02.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.475 "strip_size_kb": 64, 00:09:02.475 "state": "configuring", 00:09:02.475 "raid_level": "concat", 00:09:02.475 "superblock": false, 00:09:02.475 "num_base_bdevs": 2, 00:09:02.475 "num_base_bdevs_discovered": 1, 00:09:02.475 "num_base_bdevs_operational": 2, 00:09:02.475 "base_bdevs_list": [ 00:09:02.475 { 00:09:02.475 "name": "BaseBdev1", 00:09:02.475 "uuid": "914f4778-d59c-4ef2-b470-d9c71be11534", 00:09:02.475 "is_configured": true, 00:09:02.475 "data_offset": 0, 00:09:02.475 "data_size": 65536 00:09:02.475 }, 00:09:02.475 { 00:09:02.475 "name": "BaseBdev2", 00:09:02.475 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:02.475 "is_configured": false, 00:09:02.475 "data_offset": 0, 00:09:02.475 "data_size": 0 00:09:02.475 } 00:09:02.475 ] 00:09:02.475 }' 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:02.475 14:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.042 [2024-11-27 14:08:40.171901] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:03.042 [2024-11-27 14:08:40.171963] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.042 [2024-11-27 14:08:40.183942] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:03.042 [2024-11-27 14:08:40.186519] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:03.042 [2024-11-27 14:08:40.186733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.042 "name": "Existed_Raid", 00:09:03.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.042 "strip_size_kb": 64, 00:09:03.042 "state": "configuring", 00:09:03.042 "raid_level": "concat", 00:09:03.042 "superblock": false, 00:09:03.042 "num_base_bdevs": 2, 00:09:03.042 "num_base_bdevs_discovered": 1, 00:09:03.042 "num_base_bdevs_operational": 2, 00:09:03.042 "base_bdevs_list": [ 00:09:03.042 { 00:09:03.042 "name": "BaseBdev1", 00:09:03.042 "uuid": "914f4778-d59c-4ef2-b470-d9c71be11534", 00:09:03.042 "is_configured": true, 00:09:03.042 "data_offset": 0, 00:09:03.042 "data_size": 65536 00:09:03.042 }, 00:09:03.042 { 00:09:03.042 "name": "BaseBdev2", 00:09:03.042 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:03.042 "is_configured": false, 00:09:03.042 "data_offset": 0, 00:09:03.042 "data_size": 0 00:09:03.042 } 
00:09:03.042 ] 00:09:03.042 }' 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:03.042 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.616 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:03.616 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.616 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.616 [2024-11-27 14:08:40.722703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:03.616 [2024-11-27 14:08:40.722766] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:03.616 [2024-11-27 14:08:40.722780] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:09:03.616 [2024-11-27 14:08:40.723198] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:03.616 [2024-11-27 14:08:40.723422] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:03.616 [2024-11-27 14:08:40.723443] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:03.616 [2024-11-27 14:08:40.723811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:03.616 BaseBdev2 00:09:03.616 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.616 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:03.616 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:03.617 14:08:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.617 [ 00:09:03.617 { 00:09:03.617 "name": "BaseBdev2", 00:09:03.617 "aliases": [ 00:09:03.617 "36641089-a133-4cbc-9fe1-9ec6047f55b1" 00:09:03.617 ], 00:09:03.617 "product_name": "Malloc disk", 00:09:03.617 "block_size": 512, 00:09:03.617 "num_blocks": 65536, 00:09:03.617 "uuid": "36641089-a133-4cbc-9fe1-9ec6047f55b1", 00:09:03.617 "assigned_rate_limits": { 00:09:03.617 "rw_ios_per_sec": 0, 00:09:03.617 "rw_mbytes_per_sec": 0, 00:09:03.617 "r_mbytes_per_sec": 0, 00:09:03.617 "w_mbytes_per_sec": 0 00:09:03.617 }, 00:09:03.617 "claimed": true, 00:09:03.617 "claim_type": "exclusive_write", 00:09:03.617 "zoned": false, 00:09:03.617 "supported_io_types": { 00:09:03.617 "read": true, 00:09:03.617 "write": true, 00:09:03.617 "unmap": true, 00:09:03.617 "flush": true, 00:09:03.617 "reset": true, 00:09:03.617 "nvme_admin": false, 00:09:03.617 "nvme_io": false, 00:09:03.617 "nvme_io_md": 
false, 00:09:03.617 "write_zeroes": true, 00:09:03.617 "zcopy": true, 00:09:03.617 "get_zone_info": false, 00:09:03.617 "zone_management": false, 00:09:03.617 "zone_append": false, 00:09:03.617 "compare": false, 00:09:03.617 "compare_and_write": false, 00:09:03.617 "abort": true, 00:09:03.617 "seek_hole": false, 00:09:03.617 "seek_data": false, 00:09:03.617 "copy": true, 00:09:03.617 "nvme_iov_md": false 00:09:03.617 }, 00:09:03.617 "memory_domains": [ 00:09:03.617 { 00:09:03.617 "dma_device_id": "system", 00:09:03.617 "dma_device_type": 1 00:09:03.617 }, 00:09:03.617 { 00:09:03.617 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:03.617 "dma_device_type": 2 00:09:03.617 } 00:09:03.617 ], 00:09:03.617 "driver_specific": {} 00:09:03.617 } 00:09:03.617 ] 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:03.617 "name": "Existed_Raid", 00:09:03.617 "uuid": "2c750088-b123-4aa6-b256-305657644c06", 00:09:03.617 "strip_size_kb": 64, 00:09:03.617 "state": "online", 00:09:03.617 "raid_level": "concat", 00:09:03.617 "superblock": false, 00:09:03.617 "num_base_bdevs": 2, 00:09:03.617 "num_base_bdevs_discovered": 2, 00:09:03.617 "num_base_bdevs_operational": 2, 00:09:03.617 "base_bdevs_list": [ 00:09:03.617 { 00:09:03.617 "name": "BaseBdev1", 00:09:03.617 "uuid": "914f4778-d59c-4ef2-b470-d9c71be11534", 00:09:03.617 "is_configured": true, 00:09:03.617 "data_offset": 0, 00:09:03.617 "data_size": 65536 00:09:03.617 }, 00:09:03.617 { 00:09:03.617 "name": "BaseBdev2", 00:09:03.617 "uuid": "36641089-a133-4cbc-9fe1-9ec6047f55b1", 00:09:03.617 "is_configured": true, 00:09:03.617 "data_offset": 0, 00:09:03.617 "data_size": 65536 00:09:03.617 } 00:09:03.617 ] 00:09:03.617 }' 00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:03.617 14:08:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.207 [2024-11-27 14:08:41.239523] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.207 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:04.207 "name": "Existed_Raid", 00:09:04.207 "aliases": [ 00:09:04.207 "2c750088-b123-4aa6-b256-305657644c06" 00:09:04.207 ], 00:09:04.207 "product_name": "Raid Volume", 00:09:04.207 "block_size": 512, 00:09:04.207 "num_blocks": 131072, 00:09:04.207 "uuid": "2c750088-b123-4aa6-b256-305657644c06", 00:09:04.207 "assigned_rate_limits": { 00:09:04.207 "rw_ios_per_sec": 0, 00:09:04.207 "rw_mbytes_per_sec": 0, 00:09:04.207 "r_mbytes_per_sec": 
0, 00:09:04.207 "w_mbytes_per_sec": 0 00:09:04.207 }, 00:09:04.207 "claimed": false, 00:09:04.207 "zoned": false, 00:09:04.207 "supported_io_types": { 00:09:04.207 "read": true, 00:09:04.207 "write": true, 00:09:04.207 "unmap": true, 00:09:04.207 "flush": true, 00:09:04.207 "reset": true, 00:09:04.207 "nvme_admin": false, 00:09:04.207 "nvme_io": false, 00:09:04.207 "nvme_io_md": false, 00:09:04.207 "write_zeroes": true, 00:09:04.207 "zcopy": false, 00:09:04.207 "get_zone_info": false, 00:09:04.207 "zone_management": false, 00:09:04.207 "zone_append": false, 00:09:04.207 "compare": false, 00:09:04.207 "compare_and_write": false, 00:09:04.207 "abort": false, 00:09:04.207 "seek_hole": false, 00:09:04.207 "seek_data": false, 00:09:04.207 "copy": false, 00:09:04.207 "nvme_iov_md": false 00:09:04.207 }, 00:09:04.207 "memory_domains": [ 00:09:04.207 { 00:09:04.207 "dma_device_id": "system", 00:09:04.207 "dma_device_type": 1 00:09:04.207 }, 00:09:04.207 { 00:09:04.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.207 "dma_device_type": 2 00:09:04.207 }, 00:09:04.207 { 00:09:04.207 "dma_device_id": "system", 00:09:04.207 "dma_device_type": 1 00:09:04.207 }, 00:09:04.207 { 00:09:04.207 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:04.207 "dma_device_type": 2 00:09:04.207 } 00:09:04.207 ], 00:09:04.207 "driver_specific": { 00:09:04.207 "raid": { 00:09:04.207 "uuid": "2c750088-b123-4aa6-b256-305657644c06", 00:09:04.207 "strip_size_kb": 64, 00:09:04.207 "state": "online", 00:09:04.207 "raid_level": "concat", 00:09:04.207 "superblock": false, 00:09:04.207 "num_base_bdevs": 2, 00:09:04.207 "num_base_bdevs_discovered": 2, 00:09:04.207 "num_base_bdevs_operational": 2, 00:09:04.207 "base_bdevs_list": [ 00:09:04.207 { 00:09:04.207 "name": "BaseBdev1", 00:09:04.207 "uuid": "914f4778-d59c-4ef2-b470-d9c71be11534", 00:09:04.207 "is_configured": true, 00:09:04.207 "data_offset": 0, 00:09:04.207 "data_size": 65536 00:09:04.207 }, 00:09:04.207 { 00:09:04.207 "name": "BaseBdev2", 
00:09:04.207 "uuid": "36641089-a133-4cbc-9fe1-9ec6047f55b1", 00:09:04.207 "is_configured": true, 00:09:04.207 "data_offset": 0, 00:09:04.207 "data_size": 65536 00:09:04.207 } 00:09:04.207 ] 00:09:04.207 } 00:09:04.207 } 00:09:04.207 }' 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:04.208 BaseBdev2' 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.208 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.468 [2024-11-27 14:08:41.495047] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:04.468 [2024-11-27 14:08:41.495212] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:04.468 [2024-11-27 14:08:41.495301] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # 
expected_state=offline 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:04.468 "name": "Existed_Raid", 00:09:04.468 "uuid": "2c750088-b123-4aa6-b256-305657644c06", 00:09:04.468 "strip_size_kb": 64, 00:09:04.468 
"state": "offline", 00:09:04.468 "raid_level": "concat", 00:09:04.468 "superblock": false, 00:09:04.468 "num_base_bdevs": 2, 00:09:04.468 "num_base_bdevs_discovered": 1, 00:09:04.468 "num_base_bdevs_operational": 1, 00:09:04.468 "base_bdevs_list": [ 00:09:04.468 { 00:09:04.468 "name": null, 00:09:04.468 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:04.468 "is_configured": false, 00:09:04.468 "data_offset": 0, 00:09:04.468 "data_size": 65536 00:09:04.468 }, 00:09:04.468 { 00:09:04.468 "name": "BaseBdev2", 00:09:04.468 "uuid": "36641089-a133-4cbc-9fe1-9ec6047f55b1", 00:09:04.468 "is_configured": true, 00:09:04.468 "data_offset": 0, 00:09:04.468 "data_size": 65536 00:09:04.468 } 00:09:04.468 ] 00:09:04.468 }' 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:04.468 14:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.066 [2024-11-27 14:08:42.172113] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:05.066 [2024-11-27 14:08:42.172214] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 61548 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 61548 ']' 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 61548 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:05.066 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61548 00:09:05.325 killing process with pid 61548 00:09:05.325 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:05.325 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:05.325 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61548' 00:09:05.325 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 61548 00:09:05.325 [2024-11-27 14:08:42.356166] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.325 14:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 61548 00:09:05.325 [2024-11-27 14:08:42.372041] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:06.260 14:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:06.260 00:09:06.260 real 0m5.649s 00:09:06.260 user 0m8.489s 00:09:06.260 sys 0m0.797s 00:09:06.260 14:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:06.260 14:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.260 ************************************ 00:09:06.260 END TEST raid_state_function_test 00:09:06.260 ************************************ 00:09:06.519 14:08:43 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:09:06.519 14:08:43 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 
']' 00:09:06.519 14:08:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:06.519 14:08:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.519 ************************************ 00:09:06.519 START TEST raid_state_function_test_sb 00:09:06.519 ************************************ 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 2 true 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 
00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:06.519 Process raid pid: 61806 00:09:06.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=61806 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 61806' 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 61806 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 61806 ']' 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.519 14:08:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:06.519 [2024-11-27 14:08:43.666045] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:09:06.519 [2024-11-27 14:08:43.666535] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.777 [2024-11-27 14:08:43.858840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.777 [2024-11-27 14:08:43.998644] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:07.035 [2024-11-27 14:08:44.219554] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.035 [2024-11-27 14:08:44.219902] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.599 14:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:07.599 14:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:09:07.599 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:07.599 14:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.599 14:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.599 [2024-11-27 14:08:44.654762] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:07.599 [2024-11-27 14:08:44.654977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:07.599 [2024-11-27 14:08:44.655124] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:07.600 [2024-11-27 14:08:44.655308] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:07.600 "name": "Existed_Raid", 00:09:07.600 "uuid": "4b0a79a8-6eee-4664-8997-941da7e49189", 00:09:07.600 "strip_size_kb": 64, 00:09:07.600 "state": "configuring", 00:09:07.600 "raid_level": "concat", 00:09:07.600 "superblock": true, 00:09:07.600 "num_base_bdevs": 2, 00:09:07.600 "num_base_bdevs_discovered": 0, 00:09:07.600 "num_base_bdevs_operational": 2, 00:09:07.600 "base_bdevs_list": [ 00:09:07.600 { 00:09:07.600 "name": "BaseBdev1", 00:09:07.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.600 "is_configured": false, 00:09:07.600 "data_offset": 0, 00:09:07.600 "data_size": 0 00:09:07.600 }, 00:09:07.600 { 00:09:07.600 "name": "BaseBdev2", 00:09:07.600 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:07.600 "is_configured": false, 00:09:07.600 "data_offset": 0, 00:09:07.600 "data_size": 0 00:09:07.600 } 00:09:07.600 ] 00:09:07.600 }' 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:07.600 14:08:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.165 [2024-11-27 14:08:45.166895] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: Existed_Raid 00:09:08.165 [2024-11-27 14:08:45.166951] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.165 [2024-11-27 14:08:45.174884] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:08.165 [2024-11-27 14:08:45.174935] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:08.165 [2024-11-27 14:08:45.174970] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.165 [2024-11-27 14:08:45.174988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.165 [2024-11-27 14:08:45.222762] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.165 BaseBdev1 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.165 14:08:45 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.165 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.165 [ 00:09:08.165 { 00:09:08.165 "name": "BaseBdev1", 00:09:08.165 "aliases": [ 00:09:08.165 "a7f35a93-b655-409d-9782-cdf7148f0d1f" 00:09:08.165 ], 00:09:08.165 "product_name": "Malloc disk", 00:09:08.165 "block_size": 512, 00:09:08.165 "num_blocks": 65536, 00:09:08.165 "uuid": "a7f35a93-b655-409d-9782-cdf7148f0d1f", 00:09:08.165 "assigned_rate_limits": { 00:09:08.165 "rw_ios_per_sec": 0, 00:09:08.165 "rw_mbytes_per_sec": 0, 00:09:08.165 "r_mbytes_per_sec": 0, 00:09:08.165 "w_mbytes_per_sec": 0 
00:09:08.165 }, 00:09:08.165 "claimed": true, 00:09:08.165 "claim_type": "exclusive_write", 00:09:08.165 "zoned": false, 00:09:08.165 "supported_io_types": { 00:09:08.165 "read": true, 00:09:08.165 "write": true, 00:09:08.165 "unmap": true, 00:09:08.165 "flush": true, 00:09:08.165 "reset": true, 00:09:08.165 "nvme_admin": false, 00:09:08.165 "nvme_io": false, 00:09:08.165 "nvme_io_md": false, 00:09:08.165 "write_zeroes": true, 00:09:08.165 "zcopy": true, 00:09:08.165 "get_zone_info": false, 00:09:08.165 "zone_management": false, 00:09:08.165 "zone_append": false, 00:09:08.165 "compare": false, 00:09:08.165 "compare_and_write": false, 00:09:08.165 "abort": true, 00:09:08.166 "seek_hole": false, 00:09:08.166 "seek_data": false, 00:09:08.166 "copy": true, 00:09:08.166 "nvme_iov_md": false 00:09:08.166 }, 00:09:08.166 "memory_domains": [ 00:09:08.166 { 00:09:08.166 "dma_device_id": "system", 00:09:08.166 "dma_device_type": 1 00:09:08.166 }, 00:09:08.166 { 00:09:08.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:08.166 "dma_device_type": 2 00:09:08.166 } 00:09:08.166 ], 00:09:08.166 "driver_specific": {} 00:09:08.166 } 00:09:08.166 ] 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.166 "name": "Existed_Raid", 00:09:08.166 "uuid": "47a698b1-c607-48ef-99ef-ea415c6dd860", 00:09:08.166 "strip_size_kb": 64, 00:09:08.166 "state": "configuring", 00:09:08.166 "raid_level": "concat", 00:09:08.166 "superblock": true, 00:09:08.166 "num_base_bdevs": 2, 00:09:08.166 "num_base_bdevs_discovered": 1, 00:09:08.166 "num_base_bdevs_operational": 2, 00:09:08.166 "base_bdevs_list": [ 00:09:08.166 { 00:09:08.166 "name": "BaseBdev1", 00:09:08.166 "uuid": "a7f35a93-b655-409d-9782-cdf7148f0d1f", 00:09:08.166 "is_configured": true, 00:09:08.166 "data_offset": 2048, 00:09:08.166 "data_size": 63488 00:09:08.166 }, 00:09:08.166 { 00:09:08.166 "name": "BaseBdev2", 00:09:08.166 "uuid": "00000000-0000-0000-0000-000000000000", 
00:09:08.166 "is_configured": false, 00:09:08.166 "data_offset": 0, 00:09:08.166 "data_size": 0 00:09:08.166 } 00:09:08.166 ] 00:09:08.166 }' 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.166 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.780 [2024-11-27 14:08:45.790988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:08.780 [2024-11-27 14:08:45.791189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.780 [2024-11-27 14:08:45.799054] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.780 [2024-11-27 14:08:45.801607] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:08.780 [2024-11-27 14:08:45.801672] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.780 
14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.780 
14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:08.780 "name": "Existed_Raid", 00:09:08.780 "uuid": "e01ddff2-6730-423b-9637-59e2d17900be", 00:09:08.780 "strip_size_kb": 64, 00:09:08.780 "state": "configuring", 00:09:08.780 "raid_level": "concat", 00:09:08.780 "superblock": true, 00:09:08.780 "num_base_bdevs": 2, 00:09:08.780 "num_base_bdevs_discovered": 1, 00:09:08.780 "num_base_bdevs_operational": 2, 00:09:08.780 "base_bdevs_list": [ 00:09:08.780 { 00:09:08.780 "name": "BaseBdev1", 00:09:08.780 "uuid": "a7f35a93-b655-409d-9782-cdf7148f0d1f", 00:09:08.780 "is_configured": true, 00:09:08.780 "data_offset": 2048, 00:09:08.780 "data_size": 63488 00:09:08.780 }, 00:09:08.780 { 00:09:08.780 "name": "BaseBdev2", 00:09:08.780 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:08.780 "is_configured": false, 00:09:08.780 "data_offset": 0, 00:09:08.780 "data_size": 0 00:09:08.780 } 00:09:08.780 ] 00:09:08.780 }' 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:08.780 14:08:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.347 [2024-11-27 14:08:46.422557] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:09.347 [2024-11-27 14:08:46.423187] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:09.347 [2024-11-27 14:08:46.423214] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:09.347 BaseBdev2 00:09:09.347 [2024-11-27 14:08:46.423660] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:09.347 [2024-11-27 14:08:46.423908] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:09.347 [2024-11-27 14:08:46.423940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:09.347 [2024-11-27 14:08:46.424116] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.347 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.348 
14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.348 [ 00:09:09.348 { 00:09:09.348 "name": "BaseBdev2", 00:09:09.348 "aliases": [ 00:09:09.348 "0926023e-352e-4e9c-bd3f-d9c7ea8e0b0a" 00:09:09.348 ], 00:09:09.348 "product_name": "Malloc disk", 00:09:09.348 "block_size": 512, 00:09:09.348 "num_blocks": 65536, 00:09:09.348 "uuid": "0926023e-352e-4e9c-bd3f-d9c7ea8e0b0a", 00:09:09.348 "assigned_rate_limits": { 00:09:09.348 "rw_ios_per_sec": 0, 00:09:09.348 "rw_mbytes_per_sec": 0, 00:09:09.348 "r_mbytes_per_sec": 0, 00:09:09.348 "w_mbytes_per_sec": 0 00:09:09.348 }, 00:09:09.348 "claimed": true, 00:09:09.348 "claim_type": "exclusive_write", 00:09:09.348 "zoned": false, 00:09:09.348 "supported_io_types": { 00:09:09.348 "read": true, 00:09:09.348 "write": true, 00:09:09.348 "unmap": true, 00:09:09.348 "flush": true, 00:09:09.348 "reset": true, 00:09:09.348 "nvme_admin": false, 00:09:09.348 "nvme_io": false, 00:09:09.348 "nvme_io_md": false, 00:09:09.348 "write_zeroes": true, 00:09:09.348 "zcopy": true, 00:09:09.348 "get_zone_info": false, 00:09:09.348 "zone_management": false, 00:09:09.348 "zone_append": false, 00:09:09.348 "compare": false, 00:09:09.348 "compare_and_write": false, 00:09:09.348 "abort": true, 00:09:09.348 "seek_hole": false, 00:09:09.348 "seek_data": false, 00:09:09.348 "copy": true, 00:09:09.348 "nvme_iov_md": false 00:09:09.348 }, 00:09:09.348 "memory_domains": [ 00:09:09.348 { 00:09:09.348 "dma_device_id": "system", 00:09:09.348 "dma_device_type": 1 00:09:09.348 }, 00:09:09.348 { 00:09:09.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.348 "dma_device_type": 2 00:09:09.348 } 00:09:09.348 ], 00:09:09.348 "driver_specific": {} 00:09:09.348 } 00:09:09.348 ] 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:09:09.348 14:08:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.348 14:08:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:09.348 "name": "Existed_Raid", 00:09:09.348 "uuid": "e01ddff2-6730-423b-9637-59e2d17900be", 00:09:09.348 "strip_size_kb": 64, 00:09:09.348 "state": "online", 00:09:09.348 "raid_level": "concat", 00:09:09.348 "superblock": true, 00:09:09.348 "num_base_bdevs": 2, 00:09:09.348 "num_base_bdevs_discovered": 2, 00:09:09.348 "num_base_bdevs_operational": 2, 00:09:09.348 "base_bdevs_list": [ 00:09:09.348 { 00:09:09.348 "name": "BaseBdev1", 00:09:09.348 "uuid": "a7f35a93-b655-409d-9782-cdf7148f0d1f", 00:09:09.348 "is_configured": true, 00:09:09.348 "data_offset": 2048, 00:09:09.348 "data_size": 63488 00:09:09.348 }, 00:09:09.348 { 00:09:09.348 "name": "BaseBdev2", 00:09:09.348 "uuid": "0926023e-352e-4e9c-bd3f-d9c7ea8e0b0a", 00:09:09.348 "is_configured": true, 00:09:09.348 "data_offset": 2048, 00:09:09.348 "data_size": 63488 00:09:09.348 } 00:09:09.348 ] 00:09:09.348 }' 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:09.348 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.915 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:09.915 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:09.915 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:09.915 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:09.915 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:09:09.915 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:09.915 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
Existed_Raid 00:09:09.915 14:08:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:09.915 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.915 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.915 [2024-11-27 14:08:46.979179] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:09.915 14:08:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:09.915 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:09.915 "name": "Existed_Raid", 00:09:09.915 "aliases": [ 00:09:09.915 "e01ddff2-6730-423b-9637-59e2d17900be" 00:09:09.915 ], 00:09:09.915 "product_name": "Raid Volume", 00:09:09.915 "block_size": 512, 00:09:09.915 "num_blocks": 126976, 00:09:09.915 "uuid": "e01ddff2-6730-423b-9637-59e2d17900be", 00:09:09.915 "assigned_rate_limits": { 00:09:09.915 "rw_ios_per_sec": 0, 00:09:09.915 "rw_mbytes_per_sec": 0, 00:09:09.915 "r_mbytes_per_sec": 0, 00:09:09.915 "w_mbytes_per_sec": 0 00:09:09.915 }, 00:09:09.915 "claimed": false, 00:09:09.915 "zoned": false, 00:09:09.915 "supported_io_types": { 00:09:09.915 "read": true, 00:09:09.915 "write": true, 00:09:09.915 "unmap": true, 00:09:09.915 "flush": true, 00:09:09.915 "reset": true, 00:09:09.915 "nvme_admin": false, 00:09:09.915 "nvme_io": false, 00:09:09.915 "nvme_io_md": false, 00:09:09.915 "write_zeroes": true, 00:09:09.915 "zcopy": false, 00:09:09.915 "get_zone_info": false, 00:09:09.915 "zone_management": false, 00:09:09.915 "zone_append": false, 00:09:09.915 "compare": false, 00:09:09.915 "compare_and_write": false, 00:09:09.915 "abort": false, 00:09:09.915 "seek_hole": false, 00:09:09.915 "seek_data": false, 00:09:09.915 "copy": false, 00:09:09.915 "nvme_iov_md": false 00:09:09.915 }, 00:09:09.915 "memory_domains": [ 00:09:09.915 { 00:09:09.915 
"dma_device_id": "system", 00:09:09.915 "dma_device_type": 1 00:09:09.915 }, 00:09:09.915 { 00:09:09.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.915 "dma_device_type": 2 00:09:09.915 }, 00:09:09.915 { 00:09:09.915 "dma_device_id": "system", 00:09:09.915 "dma_device_type": 1 00:09:09.915 }, 00:09:09.915 { 00:09:09.915 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:09.915 "dma_device_type": 2 00:09:09.915 } 00:09:09.915 ], 00:09:09.915 "driver_specific": { 00:09:09.915 "raid": { 00:09:09.915 "uuid": "e01ddff2-6730-423b-9637-59e2d17900be", 00:09:09.915 "strip_size_kb": 64, 00:09:09.915 "state": "online", 00:09:09.915 "raid_level": "concat", 00:09:09.915 "superblock": true, 00:09:09.915 "num_base_bdevs": 2, 00:09:09.915 "num_base_bdevs_discovered": 2, 00:09:09.915 "num_base_bdevs_operational": 2, 00:09:09.915 "base_bdevs_list": [ 00:09:09.915 { 00:09:09.915 "name": "BaseBdev1", 00:09:09.915 "uuid": "a7f35a93-b655-409d-9782-cdf7148f0d1f", 00:09:09.915 "is_configured": true, 00:09:09.915 "data_offset": 2048, 00:09:09.915 "data_size": 63488 00:09:09.915 }, 00:09:09.915 { 00:09:09.915 "name": "BaseBdev2", 00:09:09.915 "uuid": "0926023e-352e-4e9c-bd3f-d9c7ea8e0b0a", 00:09:09.915 "is_configured": true, 00:09:09.915 "data_offset": 2048, 00:09:09.915 "data_size": 63488 00:09:09.915 } 00:09:09.915 ] 00:09:09.915 } 00:09:09.915 } 00:09:09.915 }' 00:09:09.915 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:09.915 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:09.915 BaseBdev2' 00:09:09.915 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.915 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:09.915 14:08:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:09.915 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:09.915 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:09.915 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:09.915 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:09.915 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # 
rpc_cmd bdev_malloc_delete BaseBdev1 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.173 [2024-11-27 14:08:47.246964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:10.173 [2024-11-27 14:08:47.247141] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:10.173 [2024-11-27 14:08:47.247230] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 
00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:10.173 "name": "Existed_Raid", 00:09:10.173 "uuid": "e01ddff2-6730-423b-9637-59e2d17900be", 00:09:10.173 "strip_size_kb": 64, 00:09:10.173 "state": "offline", 00:09:10.173 "raid_level": "concat", 00:09:10.173 "superblock": true, 00:09:10.173 "num_base_bdevs": 2, 00:09:10.173 "num_base_bdevs_discovered": 1, 00:09:10.173 "num_base_bdevs_operational": 1, 00:09:10.173 "base_bdevs_list": [ 00:09:10.173 { 00:09:10.173 "name": null, 00:09:10.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:10.173 "is_configured": false, 00:09:10.173 "data_offset": 0, 00:09:10.173 "data_size": 63488 00:09:10.173 }, 00:09:10.173 { 00:09:10.173 "name": "BaseBdev2", 00:09:10.173 "uuid": "0926023e-352e-4e9c-bd3f-d9c7ea8e0b0a", 00:09:10.173 "is_configured": true, 00:09:10.173 "data_offset": 2048, 00:09:10.173 "data_size": 63488 00:09:10.173 } 00:09:10.173 ] 
00:09:10.173 }' 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:10.173 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.740 [2024-11-27 14:08:47.887583] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:10.740 [2024-11-27 14:08:47.887650] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.740 14:08:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:10.740 14:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 61806 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 61806 ']' 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 61806 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61806 00:09:10.999 killing process with pid 61806 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61806' 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 61806 00:09:10.999 [2024-11-27 14:08:48.071145] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:10.999 14:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 61806 00:09:10.999 [2024-11-27 14:08:48.087250] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.948 14:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:11.948 00:09:11.948 real 0m5.606s 00:09:11.948 user 0m8.477s 00:09:11.948 sys 0m0.763s 00:09:11.948 14:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.948 14:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:11.948 ************************************ 00:09:11.948 END TEST raid_state_function_test_sb 00:09:11.948 ************************************ 00:09:11.948 14:08:49 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:09:11.948 14:08:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:11.948 14:08:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.948 14:08:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.948 ************************************ 00:09:11.948 START TEST raid_superblock_test 00:09:11.948 ************************************ 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 2 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:09:11.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=62064 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 62064 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 62064 ']' 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:11.948 14:08:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:12.207 [2024-11-27 14:08:49.308665] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:09:12.207 [2024-11-27 14:08:49.308881] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62064 ] 00:09:12.465 [2024-11-27 14:08:49.491359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.465 [2024-11-27 14:08:49.628213] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.723 [2024-11-27 14:08:49.837738] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.723 [2024-11-27 14:08:49.837827] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:13.293 
14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.293 malloc1 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.293 [2024-11-27 14:08:50.468968] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:13.293 [2024-11-27 14:08:50.469190] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.293 [2024-11-27 14:08:50.469271] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:13.293 [2024-11-27 14:08:50.469544] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.293 [2024-11-27 14:08:50.472562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.293 [2024-11-27 14:08:50.472830] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:13.293 pt1 00:09:13.293 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.294 malloc2 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.294 [2024-11-27 14:08:50.524587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:13.294 [2024-11-27 14:08:50.524678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:13.294 [2024-11-27 14:08:50.524717] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:13.294 [2024-11-27 14:08:50.524732] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:13.294 [2024-11-27 14:08:50.527759] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:13.294 [2024-11-27 14:08:50.527977] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:13.294 
pt2 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.294 [2024-11-27 14:08:50.536898] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:13.294 [2024-11-27 14:08:50.539349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:13.294 [2024-11-27 14:08:50.539746] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:13.294 [2024-11-27 14:08:50.539771] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:13.294 [2024-11-27 14:08:50.540139] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:13.294 [2024-11-27 14:08:50.540346] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:13.294 [2024-11-27 14:08:50.540373] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:13.294 [2024-11-27 14:08:50.540627] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.294 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:13.553 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:13.553 "name": "raid_bdev1", 00:09:13.553 "uuid": "02bb49d4-a738-4200-8990-771ebc49c7e6", 00:09:13.553 "strip_size_kb": 64, 00:09:13.553 "state": "online", 00:09:13.553 "raid_level": "concat", 00:09:13.553 "superblock": true, 00:09:13.553 "num_base_bdevs": 2, 00:09:13.553 "num_base_bdevs_discovered": 2, 00:09:13.553 "num_base_bdevs_operational": 2, 00:09:13.553 "base_bdevs_list": [ 00:09:13.553 { 00:09:13.553 "name": "pt1", 
00:09:13.553 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:13.553 "is_configured": true, 00:09:13.553 "data_offset": 2048, 00:09:13.553 "data_size": 63488 00:09:13.553 }, 00:09:13.553 { 00:09:13.553 "name": "pt2", 00:09:13.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:13.553 "is_configured": true, 00:09:13.553 "data_offset": 2048, 00:09:13.553 "data_size": 63488 00:09:13.553 } 00:09:13.553 ] 00:09:13.553 }' 00:09:13.553 14:08:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:13.553 14:08:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.813 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:13.813 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:13.813 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:13.813 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:13.813 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:13.813 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:13.813 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:13.813 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:13.813 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:13.813 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.813 [2024-11-27 14:08:51.053566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:13.813 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:14.072 "name": "raid_bdev1", 00:09:14.072 "aliases": [ 00:09:14.072 "02bb49d4-a738-4200-8990-771ebc49c7e6" 00:09:14.072 ], 00:09:14.072 "product_name": "Raid Volume", 00:09:14.072 "block_size": 512, 00:09:14.072 "num_blocks": 126976, 00:09:14.072 "uuid": "02bb49d4-a738-4200-8990-771ebc49c7e6", 00:09:14.072 "assigned_rate_limits": { 00:09:14.072 "rw_ios_per_sec": 0, 00:09:14.072 "rw_mbytes_per_sec": 0, 00:09:14.072 "r_mbytes_per_sec": 0, 00:09:14.072 "w_mbytes_per_sec": 0 00:09:14.072 }, 00:09:14.072 "claimed": false, 00:09:14.072 "zoned": false, 00:09:14.072 "supported_io_types": { 00:09:14.072 "read": true, 00:09:14.072 "write": true, 00:09:14.072 "unmap": true, 00:09:14.072 "flush": true, 00:09:14.072 "reset": true, 00:09:14.072 "nvme_admin": false, 00:09:14.072 "nvme_io": false, 00:09:14.072 "nvme_io_md": false, 00:09:14.072 "write_zeroes": true, 00:09:14.072 "zcopy": false, 00:09:14.072 "get_zone_info": false, 00:09:14.072 "zone_management": false, 00:09:14.072 "zone_append": false, 00:09:14.072 "compare": false, 00:09:14.072 "compare_and_write": false, 00:09:14.072 "abort": false, 00:09:14.072 "seek_hole": false, 00:09:14.072 "seek_data": false, 00:09:14.072 "copy": false, 00:09:14.072 "nvme_iov_md": false 00:09:14.072 }, 00:09:14.072 "memory_domains": [ 00:09:14.072 { 00:09:14.072 "dma_device_id": "system", 00:09:14.072 "dma_device_type": 1 00:09:14.072 }, 00:09:14.072 { 00:09:14.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.072 "dma_device_type": 2 00:09:14.072 }, 00:09:14.072 { 00:09:14.072 "dma_device_id": "system", 00:09:14.072 "dma_device_type": 1 00:09:14.072 }, 00:09:14.072 { 00:09:14.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:14.072 "dma_device_type": 2 00:09:14.072 } 00:09:14.072 ], 00:09:14.072 "driver_specific": { 00:09:14.072 "raid": { 00:09:14.072 "uuid": "02bb49d4-a738-4200-8990-771ebc49c7e6", 00:09:14.072 "strip_size_kb": 64, 00:09:14.072 "state": "online", 00:09:14.072 
"raid_level": "concat", 00:09:14.072 "superblock": true, 00:09:14.072 "num_base_bdevs": 2, 00:09:14.072 "num_base_bdevs_discovered": 2, 00:09:14.072 "num_base_bdevs_operational": 2, 00:09:14.072 "base_bdevs_list": [ 00:09:14.072 { 00:09:14.072 "name": "pt1", 00:09:14.072 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.072 "is_configured": true, 00:09:14.072 "data_offset": 2048, 00:09:14.072 "data_size": 63488 00:09:14.072 }, 00:09:14.072 { 00:09:14.072 "name": "pt2", 00:09:14.072 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.072 "is_configured": true, 00:09:14.072 "data_offset": 2048, 00:09:14.072 "data_size": 63488 00:09:14.072 } 00:09:14.072 ] 00:09:14.072 } 00:09:14.072 } 00:09:14.072 }' 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:14.072 pt2' 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.072 14:08:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:14.072 [2024-11-27 14:08:51.301506] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:14.072 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=02bb49d4-a738-4200-8990-771ebc49c7e6 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
02bb49d4-a738-4200-8990-771ebc49c7e6 ']' 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.332 [2024-11-27 14:08:51.353183] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:14.332 [2024-11-27 14:08:51.353213] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:14.332 [2024-11-27 14:08:51.353304] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:14.332 [2024-11-27 14:08:51.353390] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:14.332 [2024-11-27 14:08:51.353411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:14.332 14:08:51 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.332 [2024-11-27 14:08:51.489266] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:14.332 [2024-11-27 14:08:51.491862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:14.332 [2024-11-27 14:08:51.491951] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:14.332 [2024-11-27 14:08:51.492027] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:14.332 [2024-11-27 14:08:51.492055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:14.332 [2024-11-27 14:08:51.492071] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:14.332 request: 00:09:14.332 { 00:09:14.332 "name": "raid_bdev1", 00:09:14.332 "raid_level": "concat", 00:09:14.332 "base_bdevs": [ 00:09:14.332 "malloc1", 00:09:14.332 "malloc2" 00:09:14.332 ], 00:09:14.332 "strip_size_kb": 64, 
00:09:14.332 "superblock": false, 00:09:14.332 "method": "bdev_raid_create", 00:09:14.332 "req_id": 1 00:09:14.332 } 00:09:14.332 Got JSON-RPC error response 00:09:14.332 response: 00:09:14.332 { 00:09:14.332 "code": -17, 00:09:14.332 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:14.332 } 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.332 [2024-11-27 14:08:51.561320] vbdev_passthru.c: 607:vbdev_passthru_register: 
*NOTICE*: Match on malloc1 00:09:14.332 [2024-11-27 14:08:51.561626] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.332 [2024-11-27 14:08:51.561662] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:14.332 [2024-11-27 14:08:51.561682] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.332 [2024-11-27 14:08:51.564737] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.332 [2024-11-27 14:08:51.564857] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:14.332 [2024-11-27 14:08:51.564968] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:14.332 [2024-11-27 14:08:51.565043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:14.332 pt1 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:09:14.332 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.333 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.592 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:14.592 "name": "raid_bdev1", 00:09:14.592 "uuid": "02bb49d4-a738-4200-8990-771ebc49c7e6", 00:09:14.592 "strip_size_kb": 64, 00:09:14.592 "state": "configuring", 00:09:14.592 "raid_level": "concat", 00:09:14.592 "superblock": true, 00:09:14.592 "num_base_bdevs": 2, 00:09:14.592 "num_base_bdevs_discovered": 1, 00:09:14.592 "num_base_bdevs_operational": 2, 00:09:14.592 "base_bdevs_list": [ 00:09:14.592 { 00:09:14.592 "name": "pt1", 00:09:14.592 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:14.592 "is_configured": true, 00:09:14.592 "data_offset": 2048, 00:09:14.592 "data_size": 63488 00:09:14.592 }, 00:09:14.592 { 00:09:14.592 "name": null, 00:09:14.592 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:14.592 "is_configured": false, 00:09:14.592 "data_offset": 2048, 00:09:14.592 "data_size": 63488 00:09:14.592 } 00:09:14.592 ] 00:09:14.592 }' 00:09:14.592 14:08:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:14.592 14:08:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.851 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:14.851 14:08:52 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:14.851 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.851 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:14.851 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.851 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.851 [2024-11-27 14:08:52.081547] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:14.851 [2024-11-27 14:08:52.081652] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:14.851 [2024-11-27 14:08:52.081683] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:14.852 [2024-11-27 14:08:52.081705] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:14.852 [2024-11-27 14:08:52.082425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:14.852 [2024-11-27 14:08:52.082475] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:14.852 [2024-11-27 14:08:52.082579] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:14.852 [2024-11-27 14:08:52.082642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:14.852 [2024-11-27 14:08:52.082813] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:14.852 [2024-11-27 14:08:52.082836] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:14.852 [2024-11-27 14:08:52.083161] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:14.852 [2024-11-27 14:08:52.083337] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 
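The property checks above (`bdev_raid.sh@189`–`@193`) reduce each bdev to a single string of `block_size`, `md_size`, `md_interleave` and `dif_type` joined with spaces, then compare the RAID volume's string against each base bdev's. Null jq fields join as empty strings, which is why the log shows `cmp_raid_bdev='512   '` (the block size followed by three blanks). A minimal sketch of that comparison in plain bash — the join is hand-rolled here instead of jq, and the field values are taken from the dumps above, so treat it as an illustration rather than the suite's actual helper:

```shell
#!/usr/bin/env bash
# Sketch of the cmp_raid_bdev / cmp_base_bdev check from verify_raid_bdev_properties.
# Empty fields (md_size, md_interleave, dif_type are null for this volume) become
# trailing spaces after the join, matching the "[[ 512 == \5\1\2\ \ \ ]]" test in the log.
join_fields() {
    local IFS=' '
    echo "$*"
}

cmp_raid_bdev="$(join_fields 512 '' '' '')"   # from raid_bdev1 in the dump above
cmp_base_bdev="$(join_fields 512 '' '' '')"   # from pt1/pt2 in the dump above

if [[ "$cmp_raid_bdev" == "$cmp_base_bdev" ]]; then
    echo "base bdev matches raid bdev"
fi
```

The trailing spaces matter: the comparison is a literal string match, so a base bdev with metadata (non-null `md_size`) would produce a different joined string and fail the check.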
00:09:14.852 [2024-11-27 14:08:52.083351] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:14.852 [2024-11-27 14:08:52.083525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:14.852 pt2 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:14.852 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.110 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:15.110 "name": "raid_bdev1", 00:09:15.110 "uuid": "02bb49d4-a738-4200-8990-771ebc49c7e6", 00:09:15.110 "strip_size_kb": 64, 00:09:15.110 "state": "online", 00:09:15.110 "raid_level": "concat", 00:09:15.110 "superblock": true, 00:09:15.110 "num_base_bdevs": 2, 00:09:15.110 "num_base_bdevs_discovered": 2, 00:09:15.110 "num_base_bdevs_operational": 2, 00:09:15.110 "base_bdevs_list": [ 00:09:15.110 { 00:09:15.110 "name": "pt1", 00:09:15.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.110 "is_configured": true, 00:09:15.110 "data_offset": 2048, 00:09:15.110 "data_size": 63488 00:09:15.110 }, 00:09:15.110 { 00:09:15.110 "name": "pt2", 00:09:15.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.110 "is_configured": true, 00:09:15.110 "data_offset": 2048, 00:09:15.110 "data_size": 63488 00:09:15.110 } 00:09:15.110 ] 00:09:15.110 }' 00:09:15.110 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:15.110 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.369 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:15.369 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:15.369 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:15.369 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:15.369 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:15.369 14:08:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:15.369 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:15.369 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.369 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:15.369 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.369 [2024-11-27 14:08:52.633992] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.627 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.627 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:15.627 "name": "raid_bdev1", 00:09:15.627 "aliases": [ 00:09:15.627 "02bb49d4-a738-4200-8990-771ebc49c7e6" 00:09:15.627 ], 00:09:15.627 "product_name": "Raid Volume", 00:09:15.627 "block_size": 512, 00:09:15.627 "num_blocks": 126976, 00:09:15.627 "uuid": "02bb49d4-a738-4200-8990-771ebc49c7e6", 00:09:15.627 "assigned_rate_limits": { 00:09:15.627 "rw_ios_per_sec": 0, 00:09:15.627 "rw_mbytes_per_sec": 0, 00:09:15.627 "r_mbytes_per_sec": 0, 00:09:15.627 "w_mbytes_per_sec": 0 00:09:15.627 }, 00:09:15.627 "claimed": false, 00:09:15.627 "zoned": false, 00:09:15.627 "supported_io_types": { 00:09:15.627 "read": true, 00:09:15.627 "write": true, 00:09:15.627 "unmap": true, 00:09:15.627 "flush": true, 00:09:15.627 "reset": true, 00:09:15.627 "nvme_admin": false, 00:09:15.627 "nvme_io": false, 00:09:15.627 "nvme_io_md": false, 00:09:15.627 "write_zeroes": true, 00:09:15.627 "zcopy": false, 00:09:15.627 "get_zone_info": false, 00:09:15.627 "zone_management": false, 00:09:15.627 "zone_append": false, 00:09:15.627 "compare": false, 00:09:15.627 "compare_and_write": false, 00:09:15.627 "abort": false, 00:09:15.627 "seek_hole": false, 00:09:15.627 
"seek_data": false, 00:09:15.627 "copy": false, 00:09:15.627 "nvme_iov_md": false 00:09:15.627 }, 00:09:15.627 "memory_domains": [ 00:09:15.627 { 00:09:15.627 "dma_device_id": "system", 00:09:15.627 "dma_device_type": 1 00:09:15.627 }, 00:09:15.627 { 00:09:15.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.627 "dma_device_type": 2 00:09:15.627 }, 00:09:15.627 { 00:09:15.627 "dma_device_id": "system", 00:09:15.627 "dma_device_type": 1 00:09:15.627 }, 00:09:15.627 { 00:09:15.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.627 "dma_device_type": 2 00:09:15.627 } 00:09:15.627 ], 00:09:15.627 "driver_specific": { 00:09:15.627 "raid": { 00:09:15.627 "uuid": "02bb49d4-a738-4200-8990-771ebc49c7e6", 00:09:15.627 "strip_size_kb": 64, 00:09:15.627 "state": "online", 00:09:15.627 "raid_level": "concat", 00:09:15.627 "superblock": true, 00:09:15.627 "num_base_bdevs": 2, 00:09:15.627 "num_base_bdevs_discovered": 2, 00:09:15.627 "num_base_bdevs_operational": 2, 00:09:15.627 "base_bdevs_list": [ 00:09:15.627 { 00:09:15.627 "name": "pt1", 00:09:15.627 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:15.627 "is_configured": true, 00:09:15.627 "data_offset": 2048, 00:09:15.627 "data_size": 63488 00:09:15.627 }, 00:09:15.627 { 00:09:15.627 "name": "pt2", 00:09:15.627 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:15.627 "is_configured": true, 00:09:15.627 "data_offset": 2048, 00:09:15.627 "data_size": 63488 00:09:15.627 } 00:09:15.627 ] 00:09:15.627 } 00:09:15.627 } 00:09:15.627 }' 00:09:15.627 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:15.627 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:15.627 pt2' 00:09:15.627 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.627 14:08:52 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:15.627 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.627 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:15.627 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.627 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.627 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.628 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:15.885 [2024-11-27 14:08:52.906010] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 02bb49d4-a738-4200-8990-771ebc49c7e6 '!=' 02bb49d4-a738-4200-8990-771ebc49c7e6 ']' 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 62064 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 62064 ']' 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 62064 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62064 00:09:15.885 killing process with pid 62064 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 62064' 00:09:15.885 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 62064 00:09:15.885 [2024-11-27 14:08:52.988546] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:15.886 [2024-11-27 14:08:52.988646] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:15.886 14:08:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 62064 00:09:15.886 [2024-11-27 14:08:52.988720] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:15.886 [2024-11-27 14:08:52.988743] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:16.143 [2024-11-27 14:08:53.179740] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:17.078 14:08:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:17.078 00:09:17.078 real 0m4.985s 00:09:17.078 user 0m7.400s 00:09:17.078 sys 0m0.737s 00:09:17.078 14:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:17.078 ************************************ 00:09:17.078 END TEST raid_superblock_test 00:09:17.078 ************************************ 00:09:17.078 14:08:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.079 14:08:54 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:09:17.079 14:08:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:17.079 14:08:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:17.079 14:08:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:17.079 ************************************ 00:09:17.079 START TEST raid_read_error_test 00:09:17.079 ************************************ 00:09:17.079 14:08:54 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 read 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:17.079 14:08:54 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.t7mJr6SQLx 00:09:17.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62282 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62282 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 62282 ']' 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.079 14:08:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.405 [2024-11-27 14:08:54.361236] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:09:17.405 [2024-11-27 14:08:54.361657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62282 ] 00:09:17.405 [2024-11-27 14:08:54.551043] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.681 [2024-11-27 14:08:54.713570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.681 [2024-11-27 14:08:54.924248] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:17.681 [2024-11-27 14:08:54.924327] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.249 BaseBdev1_malloc 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.249 true 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.249 [2024-11-27 14:08:55.414287] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:18.249 [2024-11-27 14:08:55.414379] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.249 [2024-11-27 14:08:55.414406] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:18.249 [2024-11-27 14:08:55.414422] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.249 [2024-11-27 14:08:55.417113] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.249 [2024-11-27 14:08:55.417204] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:18.249 BaseBdev1 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.249 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.250 BaseBdev2_malloc 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.250 true 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.250 [2024-11-27 14:08:55.469441] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:18.250 [2024-11-27 14:08:55.469521] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:18.250 [2024-11-27 14:08:55.469546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:18.250 [2024-11-27 14:08:55.469562] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:18.250 [2024-11-27 14:08:55.472571] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:18.250 [2024-11-27 14:08:55.472633] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:18.250 BaseBdev2 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.250 [2024-11-27 14:08:55.477585] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:09:18.250 [2024-11-27 14:08:55.480420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:18.250 [2024-11-27 14:08:55.480676] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:18.250 [2024-11-27 14:08:55.480715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:18.250 [2024-11-27 14:08:55.481107] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:18.250 [2024-11-27 14:08:55.481386] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:18.250 [2024-11-27 14:08:55.481409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:18.250 [2024-11-27 14:08:55.481643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.250 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.508 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:18.508 "name": "raid_bdev1", 00:09:18.508 "uuid": "2579eb11-3219-45a4-8766-a1e6a9b80387", 00:09:18.508 "strip_size_kb": 64, 00:09:18.508 "state": "online", 00:09:18.508 "raid_level": "concat", 00:09:18.508 "superblock": true, 00:09:18.508 "num_base_bdevs": 2, 00:09:18.508 "num_base_bdevs_discovered": 2, 00:09:18.508 "num_base_bdevs_operational": 2, 00:09:18.508 "base_bdevs_list": [ 00:09:18.508 { 00:09:18.508 "name": "BaseBdev1", 00:09:18.508 "uuid": "f3881149-2d0b-526b-a52c-c1659b7315b8", 00:09:18.508 "is_configured": true, 00:09:18.508 "data_offset": 2048, 00:09:18.508 "data_size": 63488 00:09:18.508 }, 00:09:18.508 { 00:09:18.508 "name": "BaseBdev2", 00:09:18.508 "uuid": "56750f74-bcf4-596c-bd02-aec295f25280", 00:09:18.508 "is_configured": true, 00:09:18.508 "data_offset": 2048, 00:09:18.508 "data_size": 63488 00:09:18.508 } 00:09:18.508 ] 00:09:18.508 }' 00:09:18.508 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:18.508 14:08:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:18.767 14:08:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:18.767 14:08:55 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:19.026 [2024-11-27 14:08:56.055195] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.964 14:08:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:19.964 14:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:19.964 "name": "raid_bdev1", 00:09:19.964 "uuid": "2579eb11-3219-45a4-8766-a1e6a9b80387", 00:09:19.964 "strip_size_kb": 64, 00:09:19.964 "state": "online", 00:09:19.964 "raid_level": "concat", 00:09:19.964 "superblock": true, 00:09:19.964 "num_base_bdevs": 2, 00:09:19.964 "num_base_bdevs_discovered": 2, 00:09:19.964 "num_base_bdevs_operational": 2, 00:09:19.964 "base_bdevs_list": [ 00:09:19.964 { 00:09:19.964 "name": "BaseBdev1", 00:09:19.964 "uuid": "f3881149-2d0b-526b-a52c-c1659b7315b8", 00:09:19.964 "is_configured": true, 00:09:19.965 "data_offset": 2048, 00:09:19.965 "data_size": 63488 00:09:19.965 }, 00:09:19.965 { 00:09:19.965 "name": "BaseBdev2", 00:09:19.965 "uuid": "56750f74-bcf4-596c-bd02-aec295f25280", 00:09:19.965 "is_configured": true, 00:09:19.965 "data_offset": 2048, 00:09:19.965 "data_size": 63488 00:09:19.965 } 00:09:19.965 ] 00:09:19.965 }' 00:09:19.965 14:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:19.965 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.223 14:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:20.223 14:08:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:20.223 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:20.223 [2024-11-27 14:08:57.482454] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:20.223 [2024-11-27 14:08:57.482642] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.223 [2024-11-27 14:08:57.486152] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.223 [2024-11-27 14:08:57.486331] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:20.223 [2024-11-27 14:08:57.486419] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct { 00:09:20.223 "results": [ 00:09:20.223 { 00:09:20.223 "job": "raid_bdev1", 00:09:20.223 "core_mask": "0x1", 00:09:20.223 "workload": "randrw", 00:09:20.223 "percentage": 50, 00:09:20.223 "status": "finished", 00:09:20.223 "queue_depth": 1, 00:09:20.223 "io_size": 131072, 00:09:20.223 "runtime": 1.425306, 00:09:20.223 "iops": 10747.867475475441, 00:09:20.223 "mibps": 1343.4834344344301, 00:09:20.223 "io_failed": 1, 00:09:20.223 "io_timeout": 0, 00:09:20.223 "avg_latency_us": 129.18612675053404, 00:09:20.223 "min_latency_us": 36.53818181818182, 00:09:20.223 "max_latency_us": 1876.7127272727273 00:09:20.224 } 00:09:20.224 ], 00:09:20.224 "core_count": 1 00:09:20.224 } 00:09:20.224 [2024-11-27 14:08:57.486573] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:20.224 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:20.224 14:08:57 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62282 00:09:20.224 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 62282 ']' 00:09:20.224 14:08:57 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 62282 00:09:20.224 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:20.224 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.224 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62282 00:09:20.482 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.482 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.482 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62282' 00:09:20.482 killing process with pid 62282 00:09:20.482 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 62282 00:09:20.482 [2024-11-27 14:08:57.524105] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:20.482 14:08:57 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 62282 00:09:20.482 [2024-11-27 14:08:57.645290] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:21.856 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.t7mJr6SQLx 00:09:21.856 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:21.856 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:21.856 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:09:21.856 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:21.856 ************************************ 00:09:21.856 END TEST raid_read_error_test 00:09:21.856 ************************************ 00:09:21.856 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 
00:09:21.856 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:21.856 14:08:58 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:09:21.856 00:09:21.856 real 0m4.502s 00:09:21.856 user 0m5.618s 00:09:21.856 sys 0m0.534s 00:09:21.856 14:08:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.856 14:08:58 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.856 14:08:58 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:09:21.856 14:08:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:21.856 14:08:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.856 14:08:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:21.856 ************************************ 00:09:21.856 START TEST raid_write_error_test 00:09:21.856 ************************************ 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 2 write 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.856 14:08:58 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:21.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.v4iYZk8RKu 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=62422 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 62422 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 62422 ']' 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:21.856 14:08:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.856 [2024-11-27 14:08:58.895470] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:09:21.856 [2024-11-27 14:08:58.895620] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62422 ] 00:09:21.856 [2024-11-27 14:08:59.066757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:22.203 [2024-11-27 14:08:59.194881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:22.203 [2024-11-27 14:08:59.396676] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.203 [2024-11-27 14:08:59.396734] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:22.767 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:22.767 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:22.767 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.767 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:22.767 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.767 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.767 BaseBdev1_malloc 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 true 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 [2024-11-27 14:08:59.928541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:22.768 [2024-11-27 14:08:59.928749] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.768 [2024-11-27 14:08:59.928805] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:22.768 [2024-11-27 14:08:59.928826] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.768 [2024-11-27 14:08:59.931546] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.768 [2024-11-27 14:08:59.931598] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:22.768 BaseBdev1 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 BaseBdev2_malloc 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:22.768 14:08:59 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 true 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 [2024-11-27 14:08:59.984363] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:22.768 [2024-11-27 14:08:59.984433] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:22.768 [2024-11-27 14:08:59.984457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:22.768 [2024-11-27 14:08:59.984473] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:22.768 [2024-11-27 14:08:59.987257] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:22.768 [2024-11-27 14:08:59.987439] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:22.768 BaseBdev2 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 [2024-11-27 14:08:59.992437] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:22.768 [2024-11-27 14:08:59.994846] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:22.768 [2024-11-27 14:08:59.995095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:22.768 [2024-11-27 14:08:59.995119] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:09:22.768 [2024-11-27 14:08:59.995432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:22.768 [2024-11-27 14:08:59.995648] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:22.768 [2024-11-27 14:08:59.995670] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:22.768 [2024-11-27 14:08:59.995892] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:22.768 14:08:59 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:22.768 14:08:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:22.768 14:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:23.025 14:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:23.025 "name": "raid_bdev1", 00:09:23.025 "uuid": "71235df7-a005-40c2-9a94-d707b738b757", 00:09:23.025 "strip_size_kb": 64, 00:09:23.025 "state": "online", 00:09:23.025 "raid_level": "concat", 00:09:23.025 "superblock": true, 00:09:23.025 "num_base_bdevs": 2, 00:09:23.025 "num_base_bdevs_discovered": 2, 00:09:23.025 "num_base_bdevs_operational": 2, 00:09:23.025 "base_bdevs_list": [ 00:09:23.025 { 00:09:23.025 "name": "BaseBdev1", 00:09:23.025 "uuid": "3c37f187-34d5-5de1-8458-575d31d6c9e9", 00:09:23.025 "is_configured": true, 00:09:23.025 "data_offset": 2048, 00:09:23.025 "data_size": 63488 00:09:23.025 }, 00:09:23.025 { 00:09:23.025 "name": "BaseBdev2", 00:09:23.025 "uuid": "9678a5a4-c95f-52b1-8803-0bb6383bcda5", 00:09:23.025 "is_configured": true, 00:09:23.025 "data_offset": 2048, 00:09:23.025 "data_size": 63488 00:09:23.025 } 00:09:23.025 ] 00:09:23.025 }' 00:09:23.025 14:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:23.025 14:09:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:23.282 14:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- 
# sleep 1 00:09:23.282 14:09:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:23.539 [2024-11-27 14:09:00.629989] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 
00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:24.472 "name": "raid_bdev1", 00:09:24.472 "uuid": "71235df7-a005-40c2-9a94-d707b738b757", 00:09:24.472 "strip_size_kb": 64, 00:09:24.472 "state": "online", 00:09:24.472 "raid_level": "concat", 00:09:24.472 "superblock": true, 00:09:24.472 "num_base_bdevs": 2, 00:09:24.472 "num_base_bdevs_discovered": 2, 00:09:24.472 "num_base_bdevs_operational": 2, 00:09:24.472 "base_bdevs_list": [ 00:09:24.472 { 00:09:24.472 "name": "BaseBdev1", 00:09:24.472 "uuid": "3c37f187-34d5-5de1-8458-575d31d6c9e9", 00:09:24.472 "is_configured": true, 00:09:24.472 "data_offset": 2048, 00:09:24.472 "data_size": 63488 00:09:24.472 }, 00:09:24.472 { 00:09:24.472 "name": "BaseBdev2", 00:09:24.472 "uuid": "9678a5a4-c95f-52b1-8803-0bb6383bcda5", 00:09:24.472 "is_configured": true, 00:09:24.472 "data_offset": 2048, 00:09:24.472 "data_size": 63488 00:09:24.472 } 00:09:24.472 ] 00:09:24.472 }' 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:24.472 14:09:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.038 [2024-11-27 14:09:02.032373] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:25.038 [2024-11-27 14:09:02.032416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:25.038 [2024-11-27 14:09:02.035962] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:25.038 [2024-11-27 14:09:02.036151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:25.038 [2024-11-27 14:09:02.036239] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:25.038 [2024-11-27 14:09:02.036509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:25.038 { 00:09:25.038 "results": [ 00:09:25.038 { 00:09:25.038 "job": "raid_bdev1", 00:09:25.038 "core_mask": "0x1", 00:09:25.038 "workload": "randrw", 00:09:25.038 "percentage": 50, 00:09:25.038 "status": "finished", 00:09:25.038 "queue_depth": 1, 00:09:25.038 "io_size": 131072, 00:09:25.038 "runtime": 1.400204, 00:09:25.038 "iops": 11030.535550534065, 00:09:25.038 "mibps": 1378.816943816758, 00:09:25.038 "io_failed": 1, 00:09:25.038 "io_timeout": 0, 00:09:25.038 "avg_latency_us": 125.8916153637894, 00:09:25.038 "min_latency_us": 41.89090909090909, 00:09:25.038 "max_latency_us": 1817.1345454545456 00:09:25.038 } 00:09:25.038 ], 00:09:25.038 "core_count": 1 00:09:25.038 } 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 62422 00:09:25.038 14:09:02 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 62422 ']' 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 62422 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62422 00:09:25.038 killing process with pid 62422 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62422' 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 62422 00:09:25.038 [2024-11-27 14:09:02.068793] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:25.038 14:09:02 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 62422 00:09:25.038 [2024-11-27 14:09:02.190336] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:26.411 14:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:26.411 14:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.v4iYZk8RKu 00:09:26.411 14:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:26.411 14:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:09:26.411 14:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:09:26.411 14:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:26.411 14:09:03 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:26.411 14:09:03 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:09:26.411 00:09:26.411 real 0m4.499s 00:09:26.411 user 0m5.669s 00:09:26.411 sys 0m0.512s 00:09:26.411 14:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:26.411 ************************************ 00:09:26.411 END TEST raid_write_error_test 00:09:26.411 ************************************ 00:09:26.411 14:09:03 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.411 14:09:03 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:26.411 14:09:03 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:09:26.411 14:09:03 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:26.411 14:09:03 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:26.411 14:09:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:26.411 ************************************ 00:09:26.411 START TEST raid_state_function_test 00:09:26.411 ************************************ 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 false 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- 
# (( i <= num_base_bdevs )) 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:26.411 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=62566 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:26.412 Process raid pid: 62566 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62566' 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 62566 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 62566 ']' 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:26.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:26.412 14:09:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.412 [2024-11-27 14:09:03.465171] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:09:26.412 [2024-11-27 14:09:03.465329] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:26.412 [2024-11-27 14:09:03.641701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:26.669 [2024-11-27 14:09:03.772750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.926 [2024-11-27 14:09:03.978370] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:26.927 [2024-11-27 14:09:03.978428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.184 [2024-11-27 14:09:04.425494] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.184 [2024-11-27 14:09:04.425555] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.184 [2024-11-27 14:09:04.425572] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.184 [2024-11-27 14:09:04.425587] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.184 14:09:04 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.184 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.185 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.185 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.185 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.185 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.460 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.460 "name": "Existed_Raid", 00:09:27.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.460 "strip_size_kb": 0, 00:09:27.460 "state": "configuring", 00:09:27.460 
"raid_level": "raid1", 00:09:27.460 "superblock": false, 00:09:27.460 "num_base_bdevs": 2, 00:09:27.460 "num_base_bdevs_discovered": 0, 00:09:27.460 "num_base_bdevs_operational": 2, 00:09:27.460 "base_bdevs_list": [ 00:09:27.460 { 00:09:27.460 "name": "BaseBdev1", 00:09:27.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.460 "is_configured": false, 00:09:27.460 "data_offset": 0, 00:09:27.460 "data_size": 0 00:09:27.460 }, 00:09:27.460 { 00:09:27.460 "name": "BaseBdev2", 00:09:27.460 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.460 "is_configured": false, 00:09:27.460 "data_offset": 0, 00:09:27.460 "data_size": 0 00:09:27.460 } 00:09:27.460 ] 00:09:27.460 }' 00:09:27.460 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.460 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.725 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:27.725 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.725 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.726 [2024-11-27 14:09:04.953567] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:27.726 [2024-11-27 14:09:04.953612] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:27.726 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.726 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:27.726 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.726 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:27.726 [2024-11-27 14:09:04.961571] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:27.726 [2024-11-27 14:09:04.961626] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:27.726 [2024-11-27 14:09:04.961641] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:27.726 [2024-11-27 14:09:04.961659] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:27.726 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.726 14:09:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:27.726 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.726 14:09:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.984 [2024-11-27 14:09:05.006155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:27.984 BaseBdev1 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # 
rpc_cmd bdev_wait_for_examine 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.984 [ 00:09:27.984 { 00:09:27.984 "name": "BaseBdev1", 00:09:27.984 "aliases": [ 00:09:27.984 "8fbdd1db-10a2-4d5a-b9b7-52eedecd4c36" 00:09:27.984 ], 00:09:27.984 "product_name": "Malloc disk", 00:09:27.984 "block_size": 512, 00:09:27.984 "num_blocks": 65536, 00:09:27.984 "uuid": "8fbdd1db-10a2-4d5a-b9b7-52eedecd4c36", 00:09:27.984 "assigned_rate_limits": { 00:09:27.984 "rw_ios_per_sec": 0, 00:09:27.984 "rw_mbytes_per_sec": 0, 00:09:27.984 "r_mbytes_per_sec": 0, 00:09:27.984 "w_mbytes_per_sec": 0 00:09:27.984 }, 00:09:27.984 "claimed": true, 00:09:27.984 "claim_type": "exclusive_write", 00:09:27.984 "zoned": false, 00:09:27.984 "supported_io_types": { 00:09:27.984 "read": true, 00:09:27.984 "write": true, 00:09:27.984 "unmap": true, 00:09:27.984 "flush": true, 00:09:27.984 "reset": true, 00:09:27.984 "nvme_admin": false, 00:09:27.984 "nvme_io": false, 00:09:27.984 "nvme_io_md": false, 00:09:27.984 "write_zeroes": true, 00:09:27.984 "zcopy": true, 00:09:27.984 "get_zone_info": false, 00:09:27.984 "zone_management": false, 00:09:27.984 "zone_append": false, 00:09:27.984 "compare": false, 00:09:27.984 "compare_and_write": false, 00:09:27.984 "abort": true, 00:09:27.984 "seek_hole": false, 00:09:27.984 "seek_data": false, 00:09:27.984 "copy": true, 00:09:27.984 "nvme_iov_md": 
false 00:09:27.984 }, 00:09:27.984 "memory_domains": [ 00:09:27.984 { 00:09:27.984 "dma_device_id": "system", 00:09:27.984 "dma_device_type": 1 00:09:27.984 }, 00:09:27.984 { 00:09:27.984 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.984 "dma_device_type": 2 00:09:27.984 } 00:09:27.984 ], 00:09:27.984 "driver_specific": {} 00:09:27.984 } 00:09:27.984 ] 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:27.984 14:09:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:27.984 "name": "Existed_Raid", 00:09:27.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.984 "strip_size_kb": 0, 00:09:27.984 "state": "configuring", 00:09:27.984 "raid_level": "raid1", 00:09:27.984 "superblock": false, 00:09:27.984 "num_base_bdevs": 2, 00:09:27.984 "num_base_bdevs_discovered": 1, 00:09:27.984 "num_base_bdevs_operational": 2, 00:09:27.984 "base_bdevs_list": [ 00:09:27.984 { 00:09:27.984 "name": "BaseBdev1", 00:09:27.984 "uuid": "8fbdd1db-10a2-4d5a-b9b7-52eedecd4c36", 00:09:27.984 "is_configured": true, 00:09:27.984 "data_offset": 0, 00:09:27.984 "data_size": 65536 00:09:27.984 }, 00:09:27.984 { 00:09:27.984 "name": "BaseBdev2", 00:09:27.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.984 "is_configured": false, 00:09:27.984 "data_offset": 0, 00:09:27.984 "data_size": 0 00:09:27.984 } 00:09:27.984 ] 00:09:27.984 }' 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:27.984 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.551 [2024-11-27 14:09:05.534360] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:28.551 [2024-11-27 14:09:05.534421] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.551 [2024-11-27 14:09:05.542383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:28.551 [2024-11-27 14:09:05.544758] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:28.551 [2024-11-27 14:09:05.544822] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=2 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.551 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:28.551 "name": "Existed_Raid", 00:09:28.551 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.551 "strip_size_kb": 0, 00:09:28.551 "state": "configuring", 00:09:28.551 "raid_level": "raid1", 00:09:28.551 "superblock": false, 00:09:28.551 "num_base_bdevs": 2, 00:09:28.551 "num_base_bdevs_discovered": 1, 00:09:28.551 "num_base_bdevs_operational": 2, 00:09:28.551 "base_bdevs_list": [ 00:09:28.551 { 00:09:28.551 "name": "BaseBdev1", 00:09:28.551 "uuid": "8fbdd1db-10a2-4d5a-b9b7-52eedecd4c36", 00:09:28.551 "is_configured": true, 00:09:28.551 "data_offset": 0, 00:09:28.551 "data_size": 65536 00:09:28.551 }, 00:09:28.551 { 00:09:28.551 "name": "BaseBdev2", 00:09:28.552 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.552 "is_configured": false, 00:09:28.552 "data_offset": 0, 00:09:28.552 "data_size": 0 00:09:28.552 } 00:09:28.552 
] 00:09:28.552 }' 00:09:28.552 14:09:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:28.552 14:09:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 [2024-11-27 14:09:06.072441] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:28.810 [2024-11-27 14:09:06.072515] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:28.810 [2024-11-27 14:09:06.072528] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:09:28.810 [2024-11-27 14:09:06.072870] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:28.810 [2024-11-27 14:09:06.073106] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:28.810 [2024-11-27 14:09:06.073137] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:28.810 [2024-11-27 14:09:06.073448] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:28.810 BaseBdev2 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:28.810 14:09:06 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:28.810 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.069 [ 00:09:29.069 { 00:09:29.069 "name": "BaseBdev2", 00:09:29.069 "aliases": [ 00:09:29.069 "2f951d1a-1b49-40f2-9d65-e6c90b829a16" 00:09:29.069 ], 00:09:29.069 "product_name": "Malloc disk", 00:09:29.069 "block_size": 512, 00:09:29.069 "num_blocks": 65536, 00:09:29.069 "uuid": "2f951d1a-1b49-40f2-9d65-e6c90b829a16", 00:09:29.069 "assigned_rate_limits": { 00:09:29.069 "rw_ios_per_sec": 0, 00:09:29.069 "rw_mbytes_per_sec": 0, 00:09:29.069 "r_mbytes_per_sec": 0, 00:09:29.069 "w_mbytes_per_sec": 0 00:09:29.069 }, 00:09:29.069 "claimed": true, 00:09:29.069 "claim_type": "exclusive_write", 00:09:29.069 "zoned": false, 00:09:29.069 "supported_io_types": { 00:09:29.069 "read": true, 00:09:29.069 "write": true, 00:09:29.069 "unmap": true, 00:09:29.069 "flush": true, 00:09:29.069 "reset": true, 00:09:29.069 "nvme_admin": false, 00:09:29.069 "nvme_io": false, 00:09:29.069 "nvme_io_md": 
false, 00:09:29.069 "write_zeroes": true, 00:09:29.069 "zcopy": true, 00:09:29.069 "get_zone_info": false, 00:09:29.069 "zone_management": false, 00:09:29.069 "zone_append": false, 00:09:29.069 "compare": false, 00:09:29.069 "compare_and_write": false, 00:09:29.069 "abort": true, 00:09:29.069 "seek_hole": false, 00:09:29.069 "seek_data": false, 00:09:29.069 "copy": true, 00:09:29.069 "nvme_iov_md": false 00:09:29.069 }, 00:09:29.069 "memory_domains": [ 00:09:29.069 { 00:09:29.069 "dma_device_id": "system", 00:09:29.069 "dma_device_type": 1 00:09:29.069 }, 00:09:29.069 { 00:09:29.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.069 "dma_device_type": 2 00:09:29.069 } 00:09:29.069 ], 00:09:29.069 "driver_specific": {} 00:09:29.069 } 00:09:29.069 ] 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.069 "name": "Existed_Raid", 00:09:29.069 "uuid": "befc4878-395c-462b-9a9c-7a32733ce835", 00:09:29.069 "strip_size_kb": 0, 00:09:29.069 "state": "online", 00:09:29.069 "raid_level": "raid1", 00:09:29.069 "superblock": false, 00:09:29.069 "num_base_bdevs": 2, 00:09:29.069 "num_base_bdevs_discovered": 2, 00:09:29.069 "num_base_bdevs_operational": 2, 00:09:29.069 "base_bdevs_list": [ 00:09:29.069 { 00:09:29.069 "name": "BaseBdev1", 00:09:29.069 "uuid": "8fbdd1db-10a2-4d5a-b9b7-52eedecd4c36", 00:09:29.069 "is_configured": true, 00:09:29.069 "data_offset": 0, 00:09:29.069 "data_size": 65536 00:09:29.069 }, 00:09:29.069 { 00:09:29.069 "name": "BaseBdev2", 00:09:29.069 "uuid": "2f951d1a-1b49-40f2-9d65-e6c90b829a16", 00:09:29.069 "is_configured": true, 00:09:29.069 "data_offset": 0, 00:09:29.069 "data_size": 65536 00:09:29.069 } 00:09:29.069 ] 00:09:29.069 }' 00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:29.069 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:29.637 [2024-11-27 14:09:06.616988] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.637 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:29.637 "name": "Existed_Raid", 00:09:29.637 "aliases": [ 00:09:29.637 "befc4878-395c-462b-9a9c-7a32733ce835" 00:09:29.637 ], 00:09:29.637 "product_name": "Raid Volume", 00:09:29.637 "block_size": 512, 00:09:29.637 "num_blocks": 65536, 00:09:29.637 "uuid": "befc4878-395c-462b-9a9c-7a32733ce835", 00:09:29.637 "assigned_rate_limits": { 00:09:29.637 "rw_ios_per_sec": 0, 00:09:29.637 "rw_mbytes_per_sec": 0, 00:09:29.637 "r_mbytes_per_sec": 
0, 00:09:29.637 "w_mbytes_per_sec": 0 00:09:29.637 }, 00:09:29.637 "claimed": false, 00:09:29.637 "zoned": false, 00:09:29.637 "supported_io_types": { 00:09:29.637 "read": true, 00:09:29.637 "write": true, 00:09:29.637 "unmap": false, 00:09:29.637 "flush": false, 00:09:29.637 "reset": true, 00:09:29.637 "nvme_admin": false, 00:09:29.637 "nvme_io": false, 00:09:29.637 "nvme_io_md": false, 00:09:29.637 "write_zeroes": true, 00:09:29.637 "zcopy": false, 00:09:29.637 "get_zone_info": false, 00:09:29.637 "zone_management": false, 00:09:29.637 "zone_append": false, 00:09:29.637 "compare": false, 00:09:29.637 "compare_and_write": false, 00:09:29.637 "abort": false, 00:09:29.637 "seek_hole": false, 00:09:29.637 "seek_data": false, 00:09:29.637 "copy": false, 00:09:29.637 "nvme_iov_md": false 00:09:29.637 }, 00:09:29.637 "memory_domains": [ 00:09:29.637 { 00:09:29.637 "dma_device_id": "system", 00:09:29.637 "dma_device_type": 1 00:09:29.637 }, 00:09:29.637 { 00:09:29.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.637 "dma_device_type": 2 00:09:29.637 }, 00:09:29.637 { 00:09:29.637 "dma_device_id": "system", 00:09:29.637 "dma_device_type": 1 00:09:29.637 }, 00:09:29.637 { 00:09:29.637 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:29.637 "dma_device_type": 2 00:09:29.637 } 00:09:29.637 ], 00:09:29.637 "driver_specific": { 00:09:29.637 "raid": { 00:09:29.637 "uuid": "befc4878-395c-462b-9a9c-7a32733ce835", 00:09:29.637 "strip_size_kb": 0, 00:09:29.637 "state": "online", 00:09:29.637 "raid_level": "raid1", 00:09:29.637 "superblock": false, 00:09:29.637 "num_base_bdevs": 2, 00:09:29.637 "num_base_bdevs_discovered": 2, 00:09:29.637 "num_base_bdevs_operational": 2, 00:09:29.637 "base_bdevs_list": [ 00:09:29.637 { 00:09:29.637 "name": "BaseBdev1", 00:09:29.637 "uuid": "8fbdd1db-10a2-4d5a-b9b7-52eedecd4c36", 00:09:29.637 "is_configured": true, 00:09:29.637 "data_offset": 0, 00:09:29.637 "data_size": 65536 00:09:29.637 }, 00:09:29.637 { 00:09:29.638 "name": "BaseBdev2", 
00:09:29.638 "uuid": "2f951d1a-1b49-40f2-9d65-e6c90b829a16", 00:09:29.638 "is_configured": true, 00:09:29.638 "data_offset": 0, 00:09:29.638 "data_size": 65536 00:09:29.638 } 00:09:29.638 ] 00:09:29.638 } 00:09:29.638 } 00:09:29.638 }' 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:29.638 BaseBdev2' 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 
00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.638 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.638 [2024-11-27 14:09:06.872737] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=Existed_Raid 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:29.896 14:09:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:29.896 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:29.896 "name": "Existed_Raid", 00:09:29.896 "uuid": "befc4878-395c-462b-9a9c-7a32733ce835", 00:09:29.896 "strip_size_kb": 0, 00:09:29.896 "state": "online", 00:09:29.896 "raid_level": "raid1", 00:09:29.896 "superblock": false, 00:09:29.896 "num_base_bdevs": 2, 00:09:29.896 "num_base_bdevs_discovered": 1, 00:09:29.896 "num_base_bdevs_operational": 1, 00:09:29.896 "base_bdevs_list": [ 00:09:29.896 
{ 00:09:29.896 "name": null, 00:09:29.896 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.896 "is_configured": false, 00:09:29.896 "data_offset": 0, 00:09:29.896 "data_size": 65536 00:09:29.896 }, 00:09:29.896 { 00:09:29.896 "name": "BaseBdev2", 00:09:29.896 "uuid": "2f951d1a-1b49-40f2-9d65-e6c90b829a16", 00:09:29.896 "is_configured": true, 00:09:29.896 "data_offset": 0, 00:09:29.896 "data_size": 65536 00:09:29.896 } 00:09:29.896 ] 00:09:29.896 }' 00:09:29.896 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:29.896 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:09:30.463 [2024-11-27 14:09:07.561804] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:30.463 [2024-11-27 14:09:07.561942] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:30.463 [2024-11-27 14:09:07.648002] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:30.463 [2024-11-27 14:09:07.648067] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:30.463 [2024-11-27 14:09:07.648094] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 62566 00:09:30.463 14:09:07 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 62566 ']' 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 62566 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62566 00:09:30.463 killing process with pid 62566 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62566' 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 62566 00:09:30.463 [2024-11-27 14:09:07.737463] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:30.463 14:09:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 62566 00:09:30.721 [2024-11-27 14:09:07.752090] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:31.654 ************************************ 00:09:31.654 END TEST raid_state_function_test 00:09:31.654 ************************************ 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:09:31.654 00:09:31.654 real 0m5.459s 00:09:31.654 user 0m8.292s 00:09:31.654 sys 0m0.720s 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.654 14:09:08 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test 
raid_state_function_test_sb raid_state_function_test raid1 2 true 00:09:31.654 14:09:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:31.654 14:09:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.654 14:09:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:31.654 ************************************ 00:09:31.654 START TEST raid_state_function_test_sb 00:09:31.654 ************************************ 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=62819 00:09:31.654 Process raid pid: 62819 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 62819' 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 62819 00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 62819 ']' 00:09:31.654 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100
00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable
00:09:31.654 14:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:31.912 [2024-11-27 14:09:08.967402] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization...
00:09:31.912 [2024-11-27 14:09:08.967601] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:09:31.912 [2024-11-27 14:09:09.153130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:09:32.170 [2024-11-27 14:09:09.287631] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:32.428 [2024-11-27 14:09:09.495642] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:32.428 [2024-11-27 14:09:09.495729] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:32.687 [2024-11-27 14:09:09.926550] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:32.687 [2024-11-27 14:09:09.926678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:32.687 [2024-11-27 14:09:09.926709] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:32.687 [2024-11-27 14:09:09.926743] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:32.687 14:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:32.950 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:32.950 "name": "Existed_Raid",
00:09:32.950 "uuid": "5761be32-ab9d-44ea-8abb-046d39abbd1d",
00:09:32.950 "strip_size_kb": 0,
00:09:32.950 "state": "configuring",
00:09:32.950 "raid_level": "raid1",
00:09:32.950 "superblock": true,
00:09:32.950 "num_base_bdevs": 2,
00:09:32.950 "num_base_bdevs_discovered": 0,
00:09:32.950 "num_base_bdevs_operational": 2,
00:09:32.950 "base_bdevs_list": [
00:09:32.950 {
00:09:32.950 "name": "BaseBdev1",
00:09:32.950 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:32.950 "is_configured": false,
00:09:32.950 "data_offset": 0,
00:09:32.950 "data_size": 0
00:09:32.950 },
00:09:32.950 {
00:09:32.950 "name": "BaseBdev2",
00:09:32.950 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:32.950 "is_configured": false,
00:09:32.950 "data_offset": 0,
00:09:32.950 "data_size": 0
00:09:32.950 }
00:09:32.950 ]
00:09:32.950 }'
00:09:32.950 14:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:32.950 14:09:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
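The `verify_raid_bdev_state Existed_Raid configuring raid1 0 2` call traced above boils down to fetching the raid bdev's JSON description and comparing a few fields against the expected values. Below is a minimal, self-contained sketch of that check: the JSON is sample data copied from this log, and the `get_field` helper is an illustrative stand-in for the real `rpc_cmd bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'` pipeline, not SPDK code.

```shell
# Sample raid_bdev_info as captured in the log above (abridged).
raid_bdev_info='{
  "name": "Existed_Raid",
  "state": "configuring",
  "raid_level": "raid1",
  "num_base_bdevs": 2,
  "num_base_bdevs_discovered": 0,
  "num_base_bdevs_operational": 2
}'

# Hypothetical helper: extract one scalar field from the JSON blob.
# A real run would use jq; this sed keeps the sketch dependency-free.
get_field() {
    sed -n "s/.*\"$1\": *\"\{0,1\}\([^\",]*\)\"\{0,1\},\{0,1\}.*/\1/p" <<< "$raid_bdev_info"
}

state=$(get_field state)
discovered=$(get_field num_base_bdevs_discovered)

# The state-function test asserts both before creating the base bdevs.
[[ $state == configuring ]] || { echo "unexpected state: $state" >&2; exit 1; }
(( discovered == 0 )) || { echo "unexpected discovered count: $discovered" >&2; exit 1; }
echo "Existed_Raid is $state with $discovered of 2 base bdevs discovered"
```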
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.208 [2024-11-27 14:09:10.406619] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:33.208 [2024-11-27 14:09:10.406663] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.208 [2024-11-27 14:09:10.414588] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:09:33.208 [2024-11-27 14:09:10.414657] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:09:33.208 [2024-11-27 14:09:10.414671] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:33.208 [2024-11-27 14:09:10.414689] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.208 [2024-11-27 14:09:10.460742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:33.208 BaseBdev1
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.208 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.208 [
00:09:33.208 {
00:09:33.208 "name": "BaseBdev1",
00:09:33.208 "aliases": [
00:09:33.208 "5a11210d-721c-42af-802b-b89ce6f447d9"
00:09:33.208 ],
00:09:33.208 "product_name": "Malloc disk",
00:09:33.208 "block_size": 512,
00:09:33.208 "num_blocks": 65536,
00:09:33.208 "uuid": "5a11210d-721c-42af-802b-b89ce6f447d9",
00:09:33.208 "assigned_rate_limits": {
00:09:33.208 "rw_ios_per_sec": 0,
00:09:33.208 "rw_mbytes_per_sec": 0,
00:09:33.208 "r_mbytes_per_sec": 0,
00:09:33.208 "w_mbytes_per_sec": 0
00:09:33.208 },
00:09:33.208 "claimed": true,
00:09:33.208 "claim_type": "exclusive_write",
00:09:33.208 "zoned": false,
00:09:33.208 "supported_io_types": {
00:09:33.208 "read": true,
00:09:33.208 "write": true,
00:09:33.208 "unmap": true,
00:09:33.208 "flush": true,
00:09:33.208 "reset": true,
00:09:33.208 "nvme_admin": false,
00:09:33.208 "nvme_io": false,
00:09:33.208 "nvme_io_md": false,
00:09:33.208 "write_zeroes": true,
00:09:33.208 "zcopy": true,
00:09:33.208 "get_zone_info": false,
00:09:33.208 "zone_management": false,
00:09:33.208 "zone_append": false,
00:09:33.208 "compare": false,
00:09:33.208 "compare_and_write": false,
00:09:33.208 "abort": true,
00:09:33.208 "seek_hole": false,
00:09:33.208 "seek_data": false,
00:09:33.208 "copy": true,
00:09:33.208 "nvme_iov_md": false
00:09:33.208 },
00:09:33.208 "memory_domains": [
00:09:33.208 {
00:09:33.208 "dma_device_id": "system",
00:09:33.208 "dma_device_type": 1
00:09:33.208 },
00:09:33.208 {
00:09:33.208 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:33.209 "dma_device_type": 2
00:09:33.209 }
00:09:33.209 ],
00:09:33.209 "driver_specific": {}
00:09:33.209 }
00:09:33.209 ]
00:09:33.209 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.209 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:33.209 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:33.467 "name": "Existed_Raid",
00:09:33.467 "uuid": "981110bc-420b-4ee7-b96a-e59a2452c85f",
00:09:33.467 "strip_size_kb": 0,
00:09:33.467 "state": "configuring",
00:09:33.467 "raid_level": "raid1",
00:09:33.467 "superblock": true,
00:09:33.467 "num_base_bdevs": 2,
00:09:33.467 "num_base_bdevs_discovered": 1,
00:09:33.467 "num_base_bdevs_operational": 2,
00:09:33.467 "base_bdevs_list": [
00:09:33.467 {
00:09:33.467 "name": "BaseBdev1",
00:09:33.467 "uuid": "5a11210d-721c-42af-802b-b89ce6f447d9",
00:09:33.467 "is_configured": true,
00:09:33.467 "data_offset": 2048,
00:09:33.467 "data_size": 63488
00:09:33.467 },
00:09:33.467 {
00:09:33.467 "name": "BaseBdev2",
00:09:33.467 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:33.467 "is_configured": false,
00:09:33.467 "data_offset": 0,
00:09:33.467 "data_size": 0
00:09:33.467 }
00:09:33.467 ]
00:09:33.467 }'
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:33.467 14:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.033 [2024-11-27 14:09:11.008960] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:09:34.033 [2024-11-27 14:09:11.009033] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.033 [2024-11-27 14:09:11.016996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:09:34.033 [2024-11-27 14:09:11.019545] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:09:34.033 [2024-11-27 14:09:11.019603] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:34.033 "name": "Existed_Raid",
00:09:34.033 "uuid": "9a8d92f5-2f25-40d0-8640-d6dc623ee6f5",
00:09:34.033 "strip_size_kb": 0,
00:09:34.033 "state": "configuring",
00:09:34.033 "raid_level": "raid1",
00:09:34.033 "superblock": true,
00:09:34.033 "num_base_bdevs": 2,
00:09:34.033 "num_base_bdevs_discovered": 1,
00:09:34.033 "num_base_bdevs_operational": 2,
00:09:34.033 "base_bdevs_list": [
00:09:34.033 {
00:09:34.033 "name": "BaseBdev1",
00:09:34.033 "uuid": "5a11210d-721c-42af-802b-b89ce6f447d9",
00:09:34.033 "is_configured": true,
00:09:34.033 "data_offset": 2048,
00:09:34.033 "data_size": 63488
00:09:34.033 },
00:09:34.033 {
00:09:34.033 "name": "BaseBdev2",
00:09:34.033 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:34.033 "is_configured": false,
00:09:34.033 "data_offset": 0,
00:09:34.033 "data_size": 0
00:09:34.033 }
00:09:34.033 ]
00:09:34.033 }'
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:34.033 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.291 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:09:34.291 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.291 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.549 [2024-11-27 14:09:11.571684] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:09:34.549 [2024-11-27 14:09:11.572058] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80
00:09:34.549 [2024-11-27 14:09:11.572079] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512
00:09:34.549 [2024-11-27 14:09:11.572407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40
00:09:34.549 BaseBdev2
00:09:34.549 [2024-11-27 14:09:11.572612] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80
00:09:34.549 [2024-11-27 14:09:11.572635] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80
00:09:34.549 [2024-11-27 14:09:11.572840] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.549 [
00:09:34.549 {
00:09:34.549 "name": "BaseBdev2",
00:09:34.549 "aliases": [
00:09:34.549 "fc217c36-5b25-444a-b889-a8ffa31b8b4f"
00:09:34.549 ],
00:09:34.549 "product_name": "Malloc disk",
00:09:34.549 "block_size": 512,
00:09:34.549 "num_blocks": 65536,
00:09:34.549 "uuid": "fc217c36-5b25-444a-b889-a8ffa31b8b4f",
00:09:34.549 "assigned_rate_limits": {
00:09:34.549 "rw_ios_per_sec": 0,
00:09:34.549 "rw_mbytes_per_sec": 0,
00:09:34.549 "r_mbytes_per_sec": 0,
00:09:34.549 "w_mbytes_per_sec": 0
00:09:34.549 },
00:09:34.549 "claimed": true,
00:09:34.549 "claim_type": "exclusive_write",
00:09:34.549 "zoned": false,
00:09:34.549 "supported_io_types": {
00:09:34.549 "read": true,
00:09:34.549 "write": true,
00:09:34.549 "unmap": true,
00:09:34.549 "flush": true,
00:09:34.549 "reset": true,
00:09:34.549 "nvme_admin": false,
00:09:34.549 "nvme_io": false,
00:09:34.549 "nvme_io_md": false,
00:09:34.549 "write_zeroes": true,
00:09:34.549 "zcopy": true,
00:09:34.549 "get_zone_info": false,
00:09:34.549 "zone_management": false,
00:09:34.549 "zone_append": false,
00:09:34.549 "compare": false,
00:09:34.549 "compare_and_write": false,
00:09:34.549 "abort": true,
00:09:34.549 "seek_hole": false,
00:09:34.549 "seek_data": false,
00:09:34.549 "copy": true,
00:09:34.549 "nvme_iov_md": false
00:09:34.549 },
00:09:34.549 "memory_domains": [
00:09:34.549 {
00:09:34.549 "dma_device_id": "system",
00:09:34.549 "dma_device_type": 1
00:09:34.549 },
00:09:34.549 {
00:09:34.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:34.549 "dma_device_type": 2
00:09:34.549 }
00:09:34.549 ],
00:09:34.549 "driver_specific": {}
00:09:34.549 }
00:09:34.549 ]
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:34.549 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:34.550 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:34.550 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.550 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:34.550 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.550 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:34.550 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:34.550 "name": "Existed_Raid",
00:09:34.550 "uuid": "9a8d92f5-2f25-40d0-8640-d6dc623ee6f5",
00:09:34.550 "strip_size_kb": 0,
00:09:34.550 "state": "online",
00:09:34.550 "raid_level": "raid1",
00:09:34.550 "superblock": true,
00:09:34.550 "num_base_bdevs": 2,
00:09:34.550 "num_base_bdevs_discovered": 2,
00:09:34.550 "num_base_bdevs_operational": 2,
00:09:34.550 "base_bdevs_list": [
00:09:34.550 {
00:09:34.550 "name": "BaseBdev1",
00:09:34.550 "uuid": "5a11210d-721c-42af-802b-b89ce6f447d9",
00:09:34.550 "is_configured": true,
00:09:34.550 "data_offset": 2048,
00:09:34.550 "data_size": 63488
00:09:34.550 },
00:09:34.550 {
00:09:34.550 "name": "BaseBdev2",
00:09:34.550 "uuid": "fc217c36-5b25-444a-b889-a8ffa31b8b4f",
00:09:34.550 "is_configured": true,
00:09:34.550 "data_offset": 2048,
00:09:34.550 "data_size": 63488
00:09:34.550 }
00:09:34.550 ]
00:09:34.550 }'
00:09:34.550 14:09:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:34.550 14:09:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:34.807 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid
00:09:34.807 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid
00:09:34.807 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info
00:09:34.807 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names
00:09:34.807 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name
00:09:34.807 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev
00:09:34.807 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid
00:09:34.807 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]'
00:09:34.807 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:34.807 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.066 [2024-11-27 14:09:12.088222] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{
00:09:35.066 "name": "Existed_Raid",
00:09:35.066 "aliases": [
00:09:35.066 "9a8d92f5-2f25-40d0-8640-d6dc623ee6f5"
00:09:35.066 ],
00:09:35.066 "product_name": "Raid Volume",
00:09:35.066 "block_size": 512,
00:09:35.066 "num_blocks": 63488,
00:09:35.066 "uuid": "9a8d92f5-2f25-40d0-8640-d6dc623ee6f5",
00:09:35.066 "assigned_rate_limits": {
00:09:35.066 "rw_ios_per_sec": 0,
00:09:35.066 "rw_mbytes_per_sec": 0,
00:09:35.066 "r_mbytes_per_sec": 0,
00:09:35.066 "w_mbytes_per_sec": 0
00:09:35.066 },
00:09:35.066 "claimed": false,
00:09:35.066 "zoned": false,
00:09:35.066 "supported_io_types": {
00:09:35.066 "read": true,
00:09:35.066 "write": true,
00:09:35.066 "unmap": false,
00:09:35.066 "flush": false,
00:09:35.066 "reset": true,
00:09:35.066 "nvme_admin": false,
00:09:35.066 "nvme_io": false,
00:09:35.066 "nvme_io_md": false,
00:09:35.066 "write_zeroes": true,
00:09:35.066 "zcopy": false,
00:09:35.066 "get_zone_info": false,
00:09:35.066 "zone_management": false,
00:09:35.066 "zone_append": false,
00:09:35.066 "compare": false,
00:09:35.066 "compare_and_write": false,
00:09:35.066 "abort": false,
00:09:35.066 "seek_hole": false,
00:09:35.066 "seek_data": false,
00:09:35.066 "copy": false,
00:09:35.066 "nvme_iov_md": false
00:09:35.066 },
00:09:35.066 "memory_domains": [
00:09:35.066 {
00:09:35.066 "dma_device_id": "system",
00:09:35.066 "dma_device_type": 1
00:09:35.066 },
00:09:35.066 {
00:09:35.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:35.066 "dma_device_type": 2
00:09:35.066 },
00:09:35.066 {
00:09:35.066 "dma_device_id": "system",
00:09:35.066 "dma_device_type": 1
00:09:35.066 },
00:09:35.066 {
00:09:35.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:09:35.066 "dma_device_type": 2
00:09:35.066 }
00:09:35.066 ],
00:09:35.066 "driver_specific": {
00:09:35.066 "raid": {
00:09:35.066 "uuid": "9a8d92f5-2f25-40d0-8640-d6dc623ee6f5",
00:09:35.066 "strip_size_kb": 0,
00:09:35.066 "state": "online",
00:09:35.066 "raid_level": "raid1",
00:09:35.066 "superblock": true,
00:09:35.066 "num_base_bdevs": 2,
00:09:35.066 "num_base_bdevs_discovered": 2,
00:09:35.066 "num_base_bdevs_operational": 2,
00:09:35.066 "base_bdevs_list": [
00:09:35.066 {
00:09:35.066 "name": "BaseBdev1",
00:09:35.066 "uuid": "5a11210d-721c-42af-802b-b89ce6f447d9",
00:09:35.066 "is_configured": true,
00:09:35.066 "data_offset": 2048,
00:09:35.066 "data_size": 63488
00:09:35.066 },
00:09:35.066 {
00:09:35.066 "name": "BaseBdev2",
00:09:35.066 "uuid": "fc217c36-5b25-444a-b889-a8ffa31b8b4f",
00:09:35.066 "is_configured": true,
00:09:35.066 "data_offset": 2048,
00:09:35.066 "data_size": 63488
00:09:35.066 }
00:09:35.066 ]
00:09:35.066 }
00:09:35.066 }
00:09:35.066 }'
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1
00:09:35.066 BaseBdev2'
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 '
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")'
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 '
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]]
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.066 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.066 [2024-11-27 14:09:12.340006] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.324 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.325 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.325 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:09:35.325 "name": "Existed_Raid",
00:09:35.325 "uuid": "9a8d92f5-2f25-40d0-8640-d6dc623ee6f5",
00:09:35.325 "strip_size_kb": 0,
00:09:35.325 "state": "online",
00:09:35.325 "raid_level": "raid1",
00:09:35.325 "superblock": true,
00:09:35.325 "num_base_bdevs": 2,
00:09:35.325 "num_base_bdevs_discovered": 1,
00:09:35.325 "num_base_bdevs_operational": 1,
00:09:35.325 "base_bdevs_list": [
00:09:35.325 {
00:09:35.325 "name": null,
00:09:35.325 "uuid": "00000000-0000-0000-0000-000000000000",
00:09:35.325 "is_configured": false,
00:09:35.325 "data_offset": 0,
00:09:35.325 "data_size": 63488
00:09:35.325 },
00:09:35.325 {
00:09:35.325 "name": "BaseBdev2",
00:09:35.325 "uuid": "fc217c36-5b25-444a-b889-a8ffa31b8b4f",
00:09:35.325 "is_configured": true,
00:09:35.325 "data_offset": 2048,
00:09:35.325 "data_size": 63488
00:09:35.325 }
00:09:35.325 ]
00:09:35.325 }'
00:09:35.325 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:09:35.325 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 ))
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs ))
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]'
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']'
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:35.890 14:09:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x
00:09:35.891 [2024-11-27 14:09:12.955019] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:09:35.891 [2024-11-27 14:09:12.955150] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline
00:09:35.891 [2024-11-27 14:09:13.042637] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct
00:09:35.891 [2024-11-27 14:09:13.042716] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb:
*DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.891 [2024-11-27 14:09:13.042736] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 62819 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 62819 ']' 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 62819 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62819 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:35.891 killing process with pid 62819 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62819' 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 62819 00:09:35.891 [2024-11-27 14:09:13.131851] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.891 14:09:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 62819 00:09:35.891 [2024-11-27 14:09:13.146581] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:37.263 14:09:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:09:37.263 00:09:37.263 real 0m5.345s 00:09:37.263 user 0m8.020s 00:09:37.263 sys 0m0.773s 00:09:37.263 14:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:37.263 ************************************ 00:09:37.263 END TEST raid_state_function_test_sb 00:09:37.263 ************************************ 00:09:37.263 14:09:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.263 14:09:14 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:09:37.263 14:09:14 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:37.263 14:09:14 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:37.263 14:09:14 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:37.263 ************************************ 00:09:37.263 START TEST 
raid_superblock_test 00:09:37.263 ************************************ 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=63071 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 63071 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 63071 ']' 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:37.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:37.263 14:09:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:37.263 [2024-11-27 14:09:14.378824] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:09:37.264 [2024-11-27 14:09:14.379048] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63071 ] 00:09:37.521 [2024-11-27 14:09:14.566199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:37.521 [2024-11-27 14:09:14.716872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:37.779 [2024-11-27 14:09:14.922149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:37.779 [2024-11-27 14:09:14.922229] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:38.037 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.037 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:09:38.037 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:09:38.037 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.037 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:09:38.037 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:09:38.312 
14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.312 malloc1 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.312 [2024-11-27 14:09:15.364981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:38.312 [2024-11-27 14:09:15.365055] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.312 [2024-11-27 14:09:15.365089] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:09:38.312 [2024-11-27 14:09:15.365105] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.312 [2024-11-27 14:09:15.367967] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.312 [2024-11-27 14:09:15.368014] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:38.312 pt1 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.312 malloc2 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.312 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.313 [2024-11-27 14:09:15.421730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:38.313 [2024-11-27 14:09:15.421815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:38.313 [2024-11-27 14:09:15.421855] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:09:38.313 [2024-11-27 14:09:15.421886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:38.313 [2024-11-27 14:09:15.424672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:38.313 [2024-11-27 14:09:15.424721] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:38.313 
pt2 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.313 [2024-11-27 14:09:15.433815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:38.313 [2024-11-27 14:09:15.436305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:38.313 [2024-11-27 14:09:15.436534] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:09:38.313 [2024-11-27 14:09:15.436558] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:38.313 [2024-11-27 14:09:15.436918] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:09:38.313 [2024-11-27 14:09:15.437133] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:09:38.313 [2024-11-27 14:09:15.437159] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:09:38.313 [2024-11-27 14:09:15.437355] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:38.313 "name": "raid_bdev1", 00:09:38.313 "uuid": "b6da41bf-f5ac-4146-b54a-b448203a7ab4", 00:09:38.313 "strip_size_kb": 0, 00:09:38.313 "state": "online", 00:09:38.313 "raid_level": "raid1", 00:09:38.313 "superblock": true, 00:09:38.313 "num_base_bdevs": 2, 00:09:38.313 "num_base_bdevs_discovered": 2, 00:09:38.313 "num_base_bdevs_operational": 2, 00:09:38.313 "base_bdevs_list": [ 00:09:38.313 { 00:09:38.313 "name": "pt1", 00:09:38.313 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:09:38.313 "is_configured": true, 00:09:38.313 "data_offset": 2048, 00:09:38.313 "data_size": 63488 00:09:38.313 }, 00:09:38.313 { 00:09:38.313 "name": "pt2", 00:09:38.313 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.313 "is_configured": true, 00:09:38.313 "data_offset": 2048, 00:09:38.313 "data_size": 63488 00:09:38.313 } 00:09:38.313 ] 00:09:38.313 }' 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:38.313 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:38.887 [2024-11-27 14:09:15.914259] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.887 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:09:38.887 "name": "raid_bdev1", 00:09:38.887 "aliases": [ 00:09:38.887 "b6da41bf-f5ac-4146-b54a-b448203a7ab4" 00:09:38.887 ], 00:09:38.887 "product_name": "Raid Volume", 00:09:38.887 "block_size": 512, 00:09:38.887 "num_blocks": 63488, 00:09:38.887 "uuid": "b6da41bf-f5ac-4146-b54a-b448203a7ab4", 00:09:38.887 "assigned_rate_limits": { 00:09:38.887 "rw_ios_per_sec": 0, 00:09:38.887 "rw_mbytes_per_sec": 0, 00:09:38.887 "r_mbytes_per_sec": 0, 00:09:38.887 "w_mbytes_per_sec": 0 00:09:38.887 }, 00:09:38.887 "claimed": false, 00:09:38.887 "zoned": false, 00:09:38.887 "supported_io_types": { 00:09:38.887 "read": true, 00:09:38.887 "write": true, 00:09:38.887 "unmap": false, 00:09:38.887 "flush": false, 00:09:38.887 "reset": true, 00:09:38.887 "nvme_admin": false, 00:09:38.887 "nvme_io": false, 00:09:38.887 "nvme_io_md": false, 00:09:38.887 "write_zeroes": true, 00:09:38.887 "zcopy": false, 00:09:38.887 "get_zone_info": false, 00:09:38.887 "zone_management": false, 00:09:38.887 "zone_append": false, 00:09:38.887 "compare": false, 00:09:38.887 "compare_and_write": false, 00:09:38.887 "abort": false, 00:09:38.887 "seek_hole": false, 00:09:38.887 "seek_data": false, 00:09:38.887 "copy": false, 00:09:38.887 "nvme_iov_md": false 00:09:38.887 }, 00:09:38.887 "memory_domains": [ 00:09:38.887 { 00:09:38.887 "dma_device_id": "system", 00:09:38.887 "dma_device_type": 1 00:09:38.887 }, 00:09:38.887 { 00:09:38.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.887 "dma_device_type": 2 00:09:38.887 }, 00:09:38.887 { 00:09:38.887 "dma_device_id": "system", 00:09:38.887 "dma_device_type": 1 00:09:38.887 }, 00:09:38.887 { 00:09:38.887 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.887 "dma_device_type": 2 00:09:38.887 } 00:09:38.887 ], 00:09:38.887 "driver_specific": { 00:09:38.887 "raid": { 00:09:38.887 "uuid": "b6da41bf-f5ac-4146-b54a-b448203a7ab4", 00:09:38.887 "strip_size_kb": 0, 00:09:38.887 "state": "online", 00:09:38.887 "raid_level": "raid1", 
00:09:38.887 "superblock": true, 00:09:38.887 "num_base_bdevs": 2, 00:09:38.887 "num_base_bdevs_discovered": 2, 00:09:38.888 "num_base_bdevs_operational": 2, 00:09:38.888 "base_bdevs_list": [ 00:09:38.888 { 00:09:38.888 "name": "pt1", 00:09:38.888 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:38.888 "is_configured": true, 00:09:38.888 "data_offset": 2048, 00:09:38.888 "data_size": 63488 00:09:38.888 }, 00:09:38.888 { 00:09:38.888 "name": "pt2", 00:09:38.888 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:38.888 "is_configured": true, 00:09:38.888 "data_offset": 2048, 00:09:38.888 "data_size": 63488 00:09:38.888 } 00:09:38.888 ] 00:09:38.888 } 00:09:38.888 } 00:09:38.888 }' 00:09:38.888 14:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:38.888 pt2' 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:38.888 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.146 [2024-11-27 14:09:16.178311] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b6da41bf-f5ac-4146-b54a-b448203a7ab4 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b6da41bf-f5ac-4146-b54a-b448203a7ab4 ']' 00:09:39.146 14:09:16 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.146 [2024-11-27 14:09:16.225962] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.146 [2024-11-27 14:09:16.225998] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:39.146 [2024-11-27 14:09:16.226113] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:39.146 [2024-11-27 14:09:16.226192] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:39.146 [2024-11-27 14:09:16.226212] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd 
bdev_passthru_delete pt1 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.146 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:39.147 14:09:16 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.147 [2024-11-27 14:09:16.362043] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:09:39.147 [2024-11-27 14:09:16.364541] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:09:39.147 [2024-11-27 14:09:16.364636] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:09:39.147 [2024-11-27 14:09:16.364712] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:09:39.147 [2024-11-27 14:09:16.364739] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:39.147 [2024-11-27 14:09:16.364755] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:09:39.147 request: 00:09:39.147 { 00:09:39.147 "name": "raid_bdev1", 00:09:39.147 "raid_level": "raid1", 00:09:39.147 "base_bdevs": [ 00:09:39.147 "malloc1", 00:09:39.147 "malloc2" 00:09:39.147 ], 00:09:39.147 "superblock": false, 00:09:39.147 "method": "bdev_raid_create", 00:09:39.147 "req_id": 1 00:09:39.147 } 00:09:39.147 Got 
JSON-RPC error response 00:09:39.147 response: 00:09:39.147 { 00:09:39.147 "code": -17, 00:09:39.147 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:09:39.147 } 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:09:39.147 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.406 [2024-11-27 14:09:16.426042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:39.406 [2024-11-27 14:09:16.426121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:09:39.406 [2024-11-27 14:09:16.426153] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:09:39.406 [2024-11-27 14:09:16.426170] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.406 [2024-11-27 14:09:16.429110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.406 [2024-11-27 14:09:16.429161] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:39.406 [2024-11-27 14:09:16.429266] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:39.406 [2024-11-27 14:09:16.429338] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:39.406 pt1 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.406 
14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.406 "name": "raid_bdev1", 00:09:39.406 "uuid": "b6da41bf-f5ac-4146-b54a-b448203a7ab4", 00:09:39.406 "strip_size_kb": 0, 00:09:39.406 "state": "configuring", 00:09:39.406 "raid_level": "raid1", 00:09:39.406 "superblock": true, 00:09:39.406 "num_base_bdevs": 2, 00:09:39.406 "num_base_bdevs_discovered": 1, 00:09:39.406 "num_base_bdevs_operational": 2, 00:09:39.406 "base_bdevs_list": [ 00:09:39.406 { 00:09:39.406 "name": "pt1", 00:09:39.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.406 "is_configured": true, 00:09:39.406 "data_offset": 2048, 00:09:39.406 "data_size": 63488 00:09:39.406 }, 00:09:39.406 { 00:09:39.406 "name": null, 00:09:39.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.406 "is_configured": false, 00:09:39.406 "data_offset": 2048, 00:09:39.406 "data_size": 63488 00:09:39.406 } 00:09:39.406 ] 00:09:39.406 }' 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.406 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs 
)) 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:39.664 [2024-11-27 14:09:16.926176] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:39.664 [2024-11-27 14:09:16.926266] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:39.664 [2024-11-27 14:09:16.926300] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:09:39.664 [2024-11-27 14:09:16.926317] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:39.664 [2024-11-27 14:09:16.926920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:39.664 [2024-11-27 14:09:16.926953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:39.664 [2024-11-27 14:09:16.927055] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:39.664 [2024-11-27 14:09:16.927096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:39.664 [2024-11-27 14:09:16.927243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:39.664 [2024-11-27 14:09:16.927265] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:39.664 [2024-11-27 14:09:16.927576] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:39.664 [2024-11-27 14:09:16.927802] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:39.664 [2024-11-27 14:09:16.927819] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 
0x617000007e80 00:09:39.664 [2024-11-27 14:09:16.927997] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:39.664 pt2 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:39.664 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:39.665 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:39.665 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:39.665 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:39.665 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:39.665 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:39.665 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:39.665 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:39.665 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:39.665 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:39.665 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:09:39.922 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:39.922 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:39.922 "name": "raid_bdev1", 00:09:39.922 "uuid": "b6da41bf-f5ac-4146-b54a-b448203a7ab4", 00:09:39.922 "strip_size_kb": 0, 00:09:39.922 "state": "online", 00:09:39.922 "raid_level": "raid1", 00:09:39.922 "superblock": true, 00:09:39.922 "num_base_bdevs": 2, 00:09:39.922 "num_base_bdevs_discovered": 2, 00:09:39.922 "num_base_bdevs_operational": 2, 00:09:39.922 "base_bdevs_list": [ 00:09:39.922 { 00:09:39.922 "name": "pt1", 00:09:39.922 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:39.922 "is_configured": true, 00:09:39.922 "data_offset": 2048, 00:09:39.922 "data_size": 63488 00:09:39.922 }, 00:09:39.922 { 00:09:39.922 "name": "pt2", 00:09:39.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:39.922 "is_configured": true, 00:09:39.922 "data_offset": 2048, 00:09:39.922 "data_size": 63488 00:09:39.922 } 00:09:39.922 ] 00:09:39.922 }' 00:09:39.922 14:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:39.922 14:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.180 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:09:40.180 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:09:40.180 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:40.180 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:09:40.180 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:40.180 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:40.180 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 
-- # jq '.[]' 00:09:40.180 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.180 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.180 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.180 [2024-11-27 14:09:17.438611] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.437 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.437 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:40.437 "name": "raid_bdev1", 00:09:40.437 "aliases": [ 00:09:40.438 "b6da41bf-f5ac-4146-b54a-b448203a7ab4" 00:09:40.438 ], 00:09:40.438 "product_name": "Raid Volume", 00:09:40.438 "block_size": 512, 00:09:40.438 "num_blocks": 63488, 00:09:40.438 "uuid": "b6da41bf-f5ac-4146-b54a-b448203a7ab4", 00:09:40.438 "assigned_rate_limits": { 00:09:40.438 "rw_ios_per_sec": 0, 00:09:40.438 "rw_mbytes_per_sec": 0, 00:09:40.438 "r_mbytes_per_sec": 0, 00:09:40.438 "w_mbytes_per_sec": 0 00:09:40.438 }, 00:09:40.438 "claimed": false, 00:09:40.438 "zoned": false, 00:09:40.438 "supported_io_types": { 00:09:40.438 "read": true, 00:09:40.438 "write": true, 00:09:40.438 "unmap": false, 00:09:40.438 "flush": false, 00:09:40.438 "reset": true, 00:09:40.438 "nvme_admin": false, 00:09:40.438 "nvme_io": false, 00:09:40.438 "nvme_io_md": false, 00:09:40.438 "write_zeroes": true, 00:09:40.438 "zcopy": false, 00:09:40.438 "get_zone_info": false, 00:09:40.438 "zone_management": false, 00:09:40.438 "zone_append": false, 00:09:40.438 "compare": false, 00:09:40.438 "compare_and_write": false, 00:09:40.438 "abort": false, 00:09:40.438 "seek_hole": false, 00:09:40.438 "seek_data": false, 00:09:40.438 "copy": false, 00:09:40.438 "nvme_iov_md": false 00:09:40.438 }, 00:09:40.438 "memory_domains": [ 00:09:40.438 { 00:09:40.438 "dma_device_id": 
"system", 00:09:40.438 "dma_device_type": 1 00:09:40.438 }, 00:09:40.438 { 00:09:40.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.438 "dma_device_type": 2 00:09:40.438 }, 00:09:40.438 { 00:09:40.438 "dma_device_id": "system", 00:09:40.438 "dma_device_type": 1 00:09:40.438 }, 00:09:40.438 { 00:09:40.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:40.438 "dma_device_type": 2 00:09:40.438 } 00:09:40.438 ], 00:09:40.438 "driver_specific": { 00:09:40.438 "raid": { 00:09:40.438 "uuid": "b6da41bf-f5ac-4146-b54a-b448203a7ab4", 00:09:40.438 "strip_size_kb": 0, 00:09:40.438 "state": "online", 00:09:40.438 "raid_level": "raid1", 00:09:40.438 "superblock": true, 00:09:40.438 "num_base_bdevs": 2, 00:09:40.438 "num_base_bdevs_discovered": 2, 00:09:40.438 "num_base_bdevs_operational": 2, 00:09:40.438 "base_bdevs_list": [ 00:09:40.438 { 00:09:40.438 "name": "pt1", 00:09:40.438 "uuid": "00000000-0000-0000-0000-000000000001", 00:09:40.438 "is_configured": true, 00:09:40.438 "data_offset": 2048, 00:09:40.438 "data_size": 63488 00:09:40.438 }, 00:09:40.438 { 00:09:40.438 "name": "pt2", 00:09:40.438 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.438 "is_configured": true, 00:09:40.438 "data_offset": 2048, 00:09:40.438 "data_size": 63488 00:09:40.438 } 00:09:40.438 ] 00:09:40.438 } 00:09:40.438 } 00:09:40.438 }' 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:09:40.438 pt2' 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:40.438 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:09:40.438 [2024-11-27 14:09:17.698659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b6da41bf-f5ac-4146-b54a-b448203a7ab4 '!=' b6da41bf-f5ac-4146-b54a-b448203a7ab4 ']' 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.695 [2024-11-27 14:09:17.750454] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 
-- # local num_base_bdevs_operational=1 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:40.695 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:40.696 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:40.696 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:40.696 14:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:40.696 "name": "raid_bdev1", 00:09:40.696 "uuid": "b6da41bf-f5ac-4146-b54a-b448203a7ab4", 00:09:40.696 "strip_size_kb": 0, 00:09:40.696 "state": "online", 00:09:40.696 "raid_level": "raid1", 00:09:40.696 "superblock": true, 00:09:40.696 "num_base_bdevs": 2, 00:09:40.696 "num_base_bdevs_discovered": 1, 00:09:40.696 "num_base_bdevs_operational": 1, 00:09:40.696 "base_bdevs_list": [ 00:09:40.696 { 00:09:40.696 "name": null, 00:09:40.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.696 "is_configured": false, 00:09:40.696 "data_offset": 0, 00:09:40.696 "data_size": 63488 00:09:40.696 }, 00:09:40.696 { 00:09:40.696 "name": "pt2", 00:09:40.696 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:40.696 "is_configured": true, 00:09:40.696 "data_offset": 2048, 00:09:40.696 "data_size": 63488 00:09:40.696 } 00:09:40.696 ] 00:09:40.696 }' 00:09:40.696 14:09:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:40.696 14:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.261 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:41.261 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.261 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.261 [2024-11-27 14:09:18.246514] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.261 [2024-11-27 14:09:18.246554] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.261 [2024-11-27 14:09:18.246675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.261 [2024-11-27 14:09:18.246741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.261 [2024-11-27 14:09:18.246761] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:09:41.262 
14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=1 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.262 [2024-11-27 14:09:18.322491] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:09:41.262 [2024-11-27 14:09:18.322565] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.262 [2024-11-27 14:09:18.322609] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:09:41.262 [2024-11-27 14:09:18.322637] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.262 [2024-11-27 
14:09:18.325649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.262 [2024-11-27 14:09:18.325704] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:09:41.262 [2024-11-27 14:09:18.325835] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:09:41.262 [2024-11-27 14:09:18.325900] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.262 [2024-11-27 14:09:18.326038] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:09:41.262 [2024-11-27 14:09:18.326061] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.262 [2024-11-27 14:09:18.326354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:09:41.262 [2024-11-27 14:09:18.326567] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:09:41.262 [2024-11-27 14:09:18.326584] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:09:41.262 [2024-11-27 14:09:18.326845] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.262 pt2 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.262 "name": "raid_bdev1", 00:09:41.262 "uuid": "b6da41bf-f5ac-4146-b54a-b448203a7ab4", 00:09:41.262 "strip_size_kb": 0, 00:09:41.262 "state": "online", 00:09:41.262 "raid_level": "raid1", 00:09:41.262 "superblock": true, 00:09:41.262 "num_base_bdevs": 2, 00:09:41.262 "num_base_bdevs_discovered": 1, 00:09:41.262 "num_base_bdevs_operational": 1, 00:09:41.262 "base_bdevs_list": [ 00:09:41.262 { 00:09:41.262 "name": null, 00:09:41.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.262 "is_configured": false, 00:09:41.262 "data_offset": 2048, 00:09:41.262 "data_size": 63488 00:09:41.262 }, 00:09:41.262 { 00:09:41.262 "name": "pt2", 00:09:41.262 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.262 "is_configured": true, 00:09:41.262 "data_offset": 2048, 00:09:41.262 "data_size": 63488 00:09:41.262 } 00:09:41.262 ] 00:09:41.262 }' 
00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.262 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.829 [2024-11-27 14:09:18.851055] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.829 [2024-11-27 14:09:18.851136] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:41.829 [2024-11-27 14:09:18.851271] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:41.829 [2024-11-27 14:09:18.851373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:41.829 [2024-11-27 14:09:18.851394] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' 
']' 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.829 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.829 [2024-11-27 14:09:18.915204] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:09:41.829 [2024-11-27 14:09:18.915392] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:41.829 [2024-11-27 14:09:18.915437] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:09:41.829 [2024-11-27 14:09:18.915456] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:41.829 [2024-11-27 14:09:18.919074] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:41.829 [2024-11-27 14:09:18.919141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:09:41.829 [2024-11-27 14:09:18.919295] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:09:41.829 [2024-11-27 14:09:18.919368] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:09:41.829 [2024-11-27 14:09:18.919665] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:09:41.829 [2024-11-27 14:09:18.919688] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:41.829 [2024-11-27 14:09:18.919717] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:09:41.829 pt1 00:09:41.829 [2024-11-27 14:09:18.919842] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:09:41.829 [2024-11-27 14:09:18.919974] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:09:41.829 [2024-11-27 14:09:18.919992] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.830 [2024-11-27 14:09:18.920396] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:09:41.830 [2024-11-27 14:09:18.920623] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:09:41.830 [2024-11-27 14:09:18.920650] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:41.830 [2024-11-27 14:09:18.920921] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:41.830 14:09:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:41.830 "name": "raid_bdev1", 00:09:41.830 "uuid": "b6da41bf-f5ac-4146-b54a-b448203a7ab4", 00:09:41.830 "strip_size_kb": 0, 00:09:41.830 "state": "online", 00:09:41.830 "raid_level": "raid1", 00:09:41.830 "superblock": true, 00:09:41.830 "num_base_bdevs": 2, 00:09:41.830 "num_base_bdevs_discovered": 1, 00:09:41.830 "num_base_bdevs_operational": 1, 00:09:41.830 "base_bdevs_list": [ 00:09:41.830 { 00:09:41.830 "name": null, 00:09:41.830 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.830 "is_configured": false, 00:09:41.830 "data_offset": 2048, 00:09:41.830 "data_size": 63488 00:09:41.830 }, 00:09:41.830 { 00:09:41.830 "name": "pt2", 00:09:41.830 "uuid": "00000000-0000-0000-0000-000000000002", 00:09:41.830 "is_configured": true, 00:09:41.830 "data_offset": 2048, 00:09:41.830 "data_size": 63488 00:09:41.830 } 00:09:41.830 ] 00:09:41.830 }' 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:41.830 14:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd 
bdev_raid_get_bdevs online 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:42.396 [2024-11-27 14:09:19.523958] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b6da41bf-f5ac-4146-b54a-b448203a7ab4 '!=' b6da41bf-f5ac-4146-b54a-b448203a7ab4 ']' 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 63071 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 63071 ']' 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 63071 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63071 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:42.396 killing process with pid 63071 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63071' 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 63071 00:09:42.396 14:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 63071 00:09:42.396 [2024-11-27 14:09:19.605793] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:42.396 [2024-11-27 14:09:19.606003] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:42.396 [2024-11-27 14:09:19.606101] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:42.396 [2024-11-27 14:09:19.606131] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:09:42.654 [2024-11-27 14:09:19.817155] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:44.036 14:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:09:44.036 00:09:44.036 real 0m6.724s 00:09:44.036 user 0m10.545s 00:09:44.036 sys 0m0.935s 00:09:44.036 14:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:44.036 14:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.036 ************************************ 00:09:44.036 END TEST raid_superblock_test 00:09:44.036 ************************************ 00:09:44.036 14:09:21 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:09:44.036 14:09:21 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:44.036 14:09:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:44.036 14:09:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:44.036 ************************************ 00:09:44.036 START TEST raid_read_error_test 00:09:44.036 ************************************ 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 read 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@795 -- # local strip_size 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.LBHUVZDr3K 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63412 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 63412 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 63412 ']' 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:44.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:44.036 14:09:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:44.036 [2024-11-27 14:09:21.149243] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:09:44.036 [2024-11-27 14:09:21.149463] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63412 ] 00:09:44.295 [2024-11-27 14:09:21.334261] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:44.295 [2024-11-27 14:09:21.487552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:44.558 [2024-11-27 14:09:21.719234] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:44.558 [2024-11-27 14:09:21.719359] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.126 BaseBdev1_malloc 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 
00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.126 true 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.126 [2024-11-27 14:09:22.165669] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:45.126 [2024-11-27 14:09:22.165885] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.126 [2024-11-27 14:09:22.165923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:45.126 [2024-11-27 14:09:22.165947] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.126 [2024-11-27 14:09:22.169223] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.126 [2024-11-27 14:09:22.169278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:45.126 BaseBdev1 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 
00:09:45.126 BaseBdev2_malloc 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.126 true 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.126 [2024-11-27 14:09:22.242856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:45.126 [2024-11-27 14:09:22.242942] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:45.126 [2024-11-27 14:09:22.242975] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:45.126 [2024-11-27 14:09:22.242999] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:45.126 [2024-11-27 14:09:22.246809] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:45.126 [2024-11-27 14:09:22.246866] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:45.126 BaseBdev2 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:45.126 14:09:22 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.126 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.126 [2024-11-27 14:09:22.255184] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:45.126 [2024-11-27 14:09:22.258455] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:45.126 [2024-11-27 14:09:22.258853] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:45.126 [2024-11-27 14:09:22.258884] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:45.127 [2024-11-27 14:09:22.259303] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:45.127 [2024-11-27 14:09:22.259597] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:45.127 [2024-11-27 14:09:22.259618] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:45.127 [2024-11-27 14:09:22.259960] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 
00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:45.127 "name": "raid_bdev1", 00:09:45.127 "uuid": "39b3210c-a0bf-417f-afa0-a750a640f260", 00:09:45.127 "strip_size_kb": 0, 00:09:45.127 "state": "online", 00:09:45.127 "raid_level": "raid1", 00:09:45.127 "superblock": true, 00:09:45.127 "num_base_bdevs": 2, 00:09:45.127 "num_base_bdevs_discovered": 2, 00:09:45.127 "num_base_bdevs_operational": 2, 00:09:45.127 "base_bdevs_list": [ 00:09:45.127 { 00:09:45.127 "name": "BaseBdev1", 00:09:45.127 "uuid": "b3fb0b83-e685-50cb-b85b-97ebd0084e9d", 00:09:45.127 "is_configured": true, 00:09:45.127 "data_offset": 2048, 00:09:45.127 "data_size": 63488 00:09:45.127 }, 00:09:45.127 { 00:09:45.127 "name": "BaseBdev2", 00:09:45.127 "uuid": "73fa10db-ef79-53df-8dbf-74d229f8992b", 00:09:45.127 "is_configured": true, 00:09:45.127 "data_offset": 2048, 00:09:45.127 "data_size": 63488 00:09:45.127 } 00:09:45.127 ] 00:09:45.127 }' 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:45.127 14:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:45.693 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:45.693 14:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:45.693 [2024-11-27 14:09:22.901533] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=2 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:46.630 14:09:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:46.630 "name": "raid_bdev1", 00:09:46.630 "uuid": "39b3210c-a0bf-417f-afa0-a750a640f260", 00:09:46.630 "strip_size_kb": 0, 00:09:46.630 "state": "online", 00:09:46.630 "raid_level": "raid1", 00:09:46.630 "superblock": true, 00:09:46.630 "num_base_bdevs": 2, 00:09:46.630 "num_base_bdevs_discovered": 2, 00:09:46.630 "num_base_bdevs_operational": 2, 00:09:46.630 "base_bdevs_list": [ 00:09:46.630 { 00:09:46.630 "name": "BaseBdev1", 00:09:46.630 "uuid": "b3fb0b83-e685-50cb-b85b-97ebd0084e9d", 00:09:46.630 "is_configured": true, 00:09:46.630 "data_offset": 2048, 00:09:46.630 "data_size": 63488 00:09:46.630 }, 00:09:46.630 { 00:09:46.630 "name": "BaseBdev2", 00:09:46.630 "uuid": "73fa10db-ef79-53df-8dbf-74d229f8992b", 00:09:46.630 "is_configured": true, 00:09:46.630 "data_offset": 2048, 00:09:46.630 "data_size": 63488 
00:09:46.630 } 00:09:46.630 ] 00:09:46.630 }' 00:09:46.630 14:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:46.631 14:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.197 14:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:47.197 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:47.198 [2024-11-27 14:09:24.309600] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:47.198 [2024-11-27 14:09:24.309648] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:47.198 [2024-11-27 14:09:24.313073] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:47.198 [2024-11-27 14:09:24.313151] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:47.198 [2024-11-27 14:09:24.313272] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:47.198 [2024-11-27 14:09:24.313301] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:47.198 { 00:09:47.198 "results": [ 00:09:47.198 { 00:09:47.198 "job": "raid_bdev1", 00:09:47.198 "core_mask": "0x1", 00:09:47.198 "workload": "randrw", 00:09:47.198 "percentage": 50, 00:09:47.198 "status": "finished", 00:09:47.198 "queue_depth": 1, 00:09:47.198 "io_size": 131072, 00:09:47.198 "runtime": 1.405641, 00:09:47.198 "iops": 11296.625525294154, 00:09:47.198 "mibps": 1412.0781906617692, 00:09:47.198 "io_failed": 0, 00:09:47.198 "io_timeout": 0, 00:09:47.198 "avg_latency_us": 83.9122811718164, 00:09:47.198 "min_latency_us": 38.86545454545455, 00:09:47.198 "max_latency_us": 2368.232727272727 00:09:47.198 } 00:09:47.198 ], 
00:09:47.198 "core_count": 1 00:09:47.198 } 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63412 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 63412 ']' 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 63412 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63412 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:47.198 killing process with pid 63412 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63412' 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 63412 00:09:47.198 14:09:24 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 63412 00:09:47.198 [2024-11-27 14:09:24.348163] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:47.456 [2024-11-27 14:09:24.475222] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:48.393 14:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.LBHUVZDr3K 00:09:48.393 14:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:48.393 14:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:09:48.393 14:09:25 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:48.393 14:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:48.393 14:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:48.393 14:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:48.393 14:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:48.393 00:09:48.393 real 0m4.582s 00:09:48.393 user 0m5.637s 00:09:48.393 sys 0m0.631s 00:09:48.393 14:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:48.393 ************************************ 00:09:48.393 END TEST raid_read_error_test 00:09:48.393 ************************************ 00:09:48.393 14:09:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.393 14:09:25 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:48.393 14:09:25 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:48.393 14:09:25 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:48.393 14:09:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:48.393 ************************************ 00:09:48.393 START TEST raid_write_error_test 00:09:48.393 ************************************ 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 2 write 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=2 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:09:48.393 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:09:48.651 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.OSzDNGHzlo 00:09:48.651 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=63552 00:09:48.651 14:09:25 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@811 -- # waitforlisten 63552 00:09:48.651 14:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 63552 ']' 00:09:48.651 14:09:25 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:48.651 14:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:48.651 14:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.651 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:48.651 14:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:48.651 14:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.651 14:09:25 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:48.651 [2024-11-27 14:09:25.791196] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:09:48.651 [2024-11-27 14:09:25.791385] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63552 ] 00:09:48.909 [2024-11-27 14:09:25.979080] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.909 [2024-11-27 14:09:26.112150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:49.168 [2024-11-27 14:09:26.321219] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.168 [2024-11-27 14:09:26.321282] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.735 BaseBdev1_malloc 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.735 true 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.735 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.736 [2024-11-27 14:09:26.850981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:49.736 [2024-11-27 14:09:26.851052] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.736 [2024-11-27 14:09:26.851084] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:09:49.736 [2024-11-27 14:09:26.851102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.736 [2024-11-27 14:09:26.854201] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.736 [2024-11-27 14:09:26.854254] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:49.736 BaseBdev1 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.736 BaseBdev2_malloc 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:09:49.736 14:09:26 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.736 true 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.736 [2024-11-27 14:09:26.916178] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:49.736 [2024-11-27 14:09:26.916258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:49.736 [2024-11-27 14:09:26.916287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:09:49.736 [2024-11-27 14:09:26.916304] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:49.736 [2024-11-27 14:09:26.919169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:49.736 [2024-11-27 14:09:26.919346] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:49.736 BaseBdev2 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 -s 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.736 [2024-11-27 14:09:26.928376] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev1 is claimed 00:09:49.736 [2024-11-27 14:09:26.931083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:49.736 [2024-11-27 14:09:26.931376] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:49.736 [2024-11-27 14:09:26.931402] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:49.736 [2024-11-27 14:09:26.931761] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:09:49.736 [2024-11-27 14:09:26.932029] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:49.736 [2024-11-27 14:09:26.932047] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:09:49.736 [2024-11-27 14:09:26.932296] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:49.736 "name": "raid_bdev1", 00:09:49.736 "uuid": "fbe488fc-5a0d-44ce-ac01-4be33c1a4f08", 00:09:49.736 "strip_size_kb": 0, 00:09:49.736 "state": "online", 00:09:49.736 "raid_level": "raid1", 00:09:49.736 "superblock": true, 00:09:49.736 "num_base_bdevs": 2, 00:09:49.736 "num_base_bdevs_discovered": 2, 00:09:49.736 "num_base_bdevs_operational": 2, 00:09:49.736 "base_bdevs_list": [ 00:09:49.736 { 00:09:49.736 "name": "BaseBdev1", 00:09:49.736 "uuid": "0c7940cc-38a9-58fe-9cd0-05363911bc51", 00:09:49.736 "is_configured": true, 00:09:49.736 "data_offset": 2048, 00:09:49.736 "data_size": 63488 00:09:49.736 }, 00:09:49.736 { 00:09:49.736 "name": "BaseBdev2", 00:09:49.736 "uuid": "4a10fee3-892c-5cd8-899d-ff195b2f5c96", 00:09:49.736 "is_configured": true, 00:09:49.736 "data_offset": 2048, 00:09:49.736 "data_size": 63488 00:09:49.736 } 00:09:49.736 ] 00:09:49.736 }' 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:49.736 14:09:26 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:50.303 14:09:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:09:50.303 14:09:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:09:50.562 [2024-11-27 14:09:27.614306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.497 [2024-11-27 14:09:28.493266] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:51.497 [2024-11-27 14:09:28.493338] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:51.497 [2024-11-27 14:09:28.493574] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d0000063c0 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=1 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:51.497 "name": "raid_bdev1", 00:09:51.497 "uuid": "fbe488fc-5a0d-44ce-ac01-4be33c1a4f08", 00:09:51.497 "strip_size_kb": 0, 00:09:51.497 "state": "online", 00:09:51.497 "raid_level": "raid1", 00:09:51.497 "superblock": true, 00:09:51.497 "num_base_bdevs": 2, 00:09:51.497 "num_base_bdevs_discovered": 1, 00:09:51.497 "num_base_bdevs_operational": 1, 00:09:51.497 "base_bdevs_list": [ 00:09:51.497 { 00:09:51.497 "name": null, 00:09:51.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:51.497 "is_configured": false, 00:09:51.497 "data_offset": 0, 00:09:51.497 "data_size": 63488 00:09:51.497 }, 00:09:51.497 { 00:09:51.497 "name": 
"BaseBdev2", 00:09:51.497 "uuid": "4a10fee3-892c-5cd8-899d-ff195b2f5c96", 00:09:51.497 "is_configured": true, 00:09:51.497 "data_offset": 2048, 00:09:51.497 "data_size": 63488 00:09:51.497 } 00:09:51.497 ] 00:09:51.497 }' 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:51.497 14:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.064 14:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:09:52.064 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:52.064 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:52.064 [2024-11-27 14:09:29.059411] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:52.064 [2024-11-27 14:09:29.059624] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:52.064 [2024-11-27 14:09:29.063286] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:52.064 [2024-11-27 14:09:29.063500] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:52.064 [2024-11-27 14:09:29.063845] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:52.064 [2024-11-27 14:09:29.064017] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:09:52.064 { 00:09:52.064 "results": [ 00:09:52.064 { 00:09:52.064 "job": "raid_bdev1", 00:09:52.064 "core_mask": "0x1", 00:09:52.064 "workload": "randrw", 00:09:52.064 "percentage": 50, 00:09:52.064 "status": "finished", 00:09:52.064 "queue_depth": 1, 00:09:52.064 "io_size": 131072, 00:09:52.064 "runtime": 1.442279, 00:09:52.064 "iops": 13230.44986441597, 00:09:52.064 "mibps": 1653.8062330519963, 00:09:52.064 "io_failed": 0, 00:09:52.064 "io_timeout": 0, 
00:09:52.064 "avg_latency_us": 70.97870815904565, 00:09:52.064 "min_latency_us": 39.09818181818182, 00:09:52.064 "max_latency_us": 1802.24 00:09:52.064 } 00:09:52.064 ], 00:09:52.064 "core_count": 1 00:09:52.064 } 00:09:52.064 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:52.064 14:09:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 63552 00:09:52.064 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 63552 ']' 00:09:52.064 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 63552 00:09:52.064 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:09:52.064 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:52.064 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63552 00:09:52.065 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:52.065 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:52.065 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63552' 00:09:52.065 killing process with pid 63552 00:09:52.065 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 63552 00:09:52.065 [2024-11-27 14:09:29.106698] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:52.065 14:09:29 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 63552 00:09:52.065 [2024-11-27 14:09:29.221535] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:53.441 14:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.OSzDNGHzlo 00:09:53.441 14:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # 
awk '{print $6}' 00:09:53.441 14:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:09:53.441 ************************************ 00:09:53.441 END TEST raid_write_error_test 00:09:53.441 ************************************ 00:09:53.441 14:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:09:53.441 14:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:09:53.441 14:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:53.441 14:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:09:53.441 14:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:53.441 00:09:53.441 real 0m4.635s 00:09:53.441 user 0m5.887s 00:09:53.441 sys 0m0.571s 00:09:53.441 14:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:53.441 14:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.441 14:09:30 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:09:53.441 14:09:30 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:09:53.441 14:09:30 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:53.441 14:09:30 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:09:53.441 14:09:30 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:53.441 14:09:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:53.441 ************************************ 00:09:53.441 START TEST raid_state_function_test 00:09:53.441 ************************************ 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 false 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local 
raid_level=raid0 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:09:53.441 14:09:30 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:09:53.441 Process raid pid: 63696 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=63696 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 63696' 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 63696 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 63696 ']' 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:53.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:53.441 14:09:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:53.441 [2024-11-27 14:09:30.465586] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:09:53.441 [2024-11-27 14:09:30.465947] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:53.441 [2024-11-27 14:09:30.653837] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.701 [2024-11-27 14:09:30.785031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:53.961 [2024-11-27 14:09:30.999690] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:53.961 [2024-11-27 14:09:30.999976] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:54.220 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:54.220 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:09:54.220 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.220 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.220 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.220 [2024-11-27 14:09:31.475310] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.220 [2024-11-27 14:09:31.475406] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.220 [2024-11-27 14:09:31.475423] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.220 [2024-11-27 14:09:31.475440] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.221 [2024-11-27 14:09:31.475450] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.221 [2024-11-27 14:09:31.475464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.221 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.480 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.480 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.480 "name": "Existed_Raid", 00:09:54.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.480 "strip_size_kb": 64, 00:09:54.480 "state": "configuring", 00:09:54.480 "raid_level": "raid0", 00:09:54.480 "superblock": false, 00:09:54.480 "num_base_bdevs": 3, 00:09:54.480 "num_base_bdevs_discovered": 0, 00:09:54.480 "num_base_bdevs_operational": 3, 00:09:54.480 "base_bdevs_list": [ 00:09:54.480 { 00:09:54.480 "name": "BaseBdev1", 00:09:54.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.480 "is_configured": false, 00:09:54.480 "data_offset": 0, 00:09:54.480 "data_size": 0 00:09:54.480 }, 00:09:54.480 { 00:09:54.480 "name": "BaseBdev2", 00:09:54.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.480 "is_configured": false, 00:09:54.480 "data_offset": 0, 00:09:54.480 "data_size": 0 00:09:54.480 }, 00:09:54.480 { 00:09:54.480 "name": "BaseBdev3", 00:09:54.480 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.480 "is_configured": false, 00:09:54.480 "data_offset": 0, 00:09:54.480 "data_size": 0 00:09:54.480 } 00:09:54.480 ] 00:09:54.480 }' 00:09:54.480 14:09:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.480 14:09:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.739 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:54.739 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.739 14:09:32 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.739 [2024-11-27 14:09:32.011430] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:54.739 [2024-11-27 14:09:32.011477] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.998 [2024-11-27 14:09:32.019401] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:54.998 [2024-11-27 14:09:32.019642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:54.998 [2024-11-27 14:09:32.019669] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:54.998 [2024-11-27 14:09:32.019688] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:54.998 [2024-11-27 14:09:32.019698] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:54.998 [2024-11-27 14:09:32.019713] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.998 [2024-11-27 14:09:32.062224] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:54.998 BaseBdev1 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.998 [ 00:09:54.998 { 00:09:54.998 "name": "BaseBdev1", 00:09:54.998 "aliases": [ 00:09:54.998 "03e55e71-2cd0-416e-abf9-ebda3c1540f4" 00:09:54.998 ], 00:09:54.998 
"product_name": "Malloc disk", 00:09:54.998 "block_size": 512, 00:09:54.998 "num_blocks": 65536, 00:09:54.998 "uuid": "03e55e71-2cd0-416e-abf9-ebda3c1540f4", 00:09:54.998 "assigned_rate_limits": { 00:09:54.998 "rw_ios_per_sec": 0, 00:09:54.998 "rw_mbytes_per_sec": 0, 00:09:54.998 "r_mbytes_per_sec": 0, 00:09:54.998 "w_mbytes_per_sec": 0 00:09:54.998 }, 00:09:54.998 "claimed": true, 00:09:54.998 "claim_type": "exclusive_write", 00:09:54.998 "zoned": false, 00:09:54.998 "supported_io_types": { 00:09:54.998 "read": true, 00:09:54.998 "write": true, 00:09:54.998 "unmap": true, 00:09:54.998 "flush": true, 00:09:54.998 "reset": true, 00:09:54.998 "nvme_admin": false, 00:09:54.998 "nvme_io": false, 00:09:54.998 "nvme_io_md": false, 00:09:54.998 "write_zeroes": true, 00:09:54.998 "zcopy": true, 00:09:54.998 "get_zone_info": false, 00:09:54.998 "zone_management": false, 00:09:54.998 "zone_append": false, 00:09:54.998 "compare": false, 00:09:54.998 "compare_and_write": false, 00:09:54.998 "abort": true, 00:09:54.998 "seek_hole": false, 00:09:54.998 "seek_data": false, 00:09:54.998 "copy": true, 00:09:54.998 "nvme_iov_md": false 00:09:54.998 }, 00:09:54.998 "memory_domains": [ 00:09:54.998 { 00:09:54.998 "dma_device_id": "system", 00:09:54.998 "dma_device_type": 1 00:09:54.998 }, 00:09:54.998 { 00:09:54.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:54.998 "dma_device_type": 2 00:09:54.998 } 00:09:54.998 ], 00:09:54.998 "driver_specific": {} 00:09:54.998 } 00:09:54.998 ] 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:54.998 14:09:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:54.998 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:54.999 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:54.999 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:54.999 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:54.999 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:54.999 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:54.999 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:54.999 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.999 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:54.999 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:54.999 "name": "Existed_Raid", 00:09:54.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.999 "strip_size_kb": 64, 00:09:54.999 "state": "configuring", 00:09:54.999 "raid_level": "raid0", 00:09:54.999 "superblock": false, 00:09:54.999 "num_base_bdevs": 3, 00:09:54.999 "num_base_bdevs_discovered": 1, 00:09:54.999 "num_base_bdevs_operational": 3, 00:09:54.999 "base_bdevs_list": [ 00:09:54.999 { 00:09:54.999 "name": "BaseBdev1", 
00:09:54.999 "uuid": "03e55e71-2cd0-416e-abf9-ebda3c1540f4", 00:09:54.999 "is_configured": true, 00:09:54.999 "data_offset": 0, 00:09:54.999 "data_size": 65536 00:09:54.999 }, 00:09:54.999 { 00:09:54.999 "name": "BaseBdev2", 00:09:54.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.999 "is_configured": false, 00:09:54.999 "data_offset": 0, 00:09:54.999 "data_size": 0 00:09:54.999 }, 00:09:54.999 { 00:09:54.999 "name": "BaseBdev3", 00:09:54.999 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:54.999 "is_configured": false, 00:09:54.999 "data_offset": 0, 00:09:54.999 "data_size": 0 00:09:54.999 } 00:09:54.999 ] 00:09:54.999 }' 00:09:54.999 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:54.999 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.569 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:09:55.569 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.569 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.569 [2024-11-27 14:09:32.618495] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:55.569 [2024-11-27 14:09:32.618557] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:09:55.569 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.569 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:55.569 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.569 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.569 [2024-11-27 
14:09:32.630569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:55.569 [2024-11-27 14:09:32.633109] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:55.570 [2024-11-27 14:09:32.633196] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:55.570 [2024-11-27 14:09:32.633214] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:55.570 [2024-11-27 14:09:32.633230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:55.570 "name": "Existed_Raid", 00:09:55.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.570 "strip_size_kb": 64, 00:09:55.570 "state": "configuring", 00:09:55.570 "raid_level": "raid0", 00:09:55.570 "superblock": false, 00:09:55.570 "num_base_bdevs": 3, 00:09:55.570 "num_base_bdevs_discovered": 1, 00:09:55.570 "num_base_bdevs_operational": 3, 00:09:55.570 "base_bdevs_list": [ 00:09:55.570 { 00:09:55.570 "name": "BaseBdev1", 00:09:55.570 "uuid": "03e55e71-2cd0-416e-abf9-ebda3c1540f4", 00:09:55.570 "is_configured": true, 00:09:55.570 "data_offset": 0, 00:09:55.570 "data_size": 65536 00:09:55.570 }, 00:09:55.570 { 00:09:55.570 "name": "BaseBdev2", 00:09:55.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.570 "is_configured": false, 00:09:55.570 "data_offset": 0, 00:09:55.570 "data_size": 0 00:09:55.570 }, 00:09:55.570 { 00:09:55.570 "name": "BaseBdev3", 00:09:55.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:55.570 "is_configured": false, 00:09:55.570 "data_offset": 0, 00:09:55.570 "data_size": 0 00:09:55.570 } 00:09:55.570 ] 00:09:55.570 }' 00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:09:55.570 14:09:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.142 [2024-11-27 14:09:33.204127] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:56.142 BaseBdev2 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:56.142 14:09:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.142 [ 00:09:56.142 { 00:09:56.142 "name": "BaseBdev2", 00:09:56.142 "aliases": [ 00:09:56.142 "00017e32-b86e-432d-bd04-aef6a46a9ee5" 00:09:56.142 ], 00:09:56.142 "product_name": "Malloc disk", 00:09:56.142 "block_size": 512, 00:09:56.142 "num_blocks": 65536, 00:09:56.142 "uuid": "00017e32-b86e-432d-bd04-aef6a46a9ee5", 00:09:56.142 "assigned_rate_limits": { 00:09:56.142 "rw_ios_per_sec": 0, 00:09:56.142 "rw_mbytes_per_sec": 0, 00:09:56.142 "r_mbytes_per_sec": 0, 00:09:56.142 "w_mbytes_per_sec": 0 00:09:56.142 }, 00:09:56.142 "claimed": true, 00:09:56.142 "claim_type": "exclusive_write", 00:09:56.142 "zoned": false, 00:09:56.142 "supported_io_types": { 00:09:56.142 "read": true, 00:09:56.142 "write": true, 00:09:56.142 "unmap": true, 00:09:56.142 "flush": true, 00:09:56.142 "reset": true, 00:09:56.142 "nvme_admin": false, 00:09:56.142 "nvme_io": false, 00:09:56.142 "nvme_io_md": false, 00:09:56.142 "write_zeroes": true, 00:09:56.142 "zcopy": true, 00:09:56.142 "get_zone_info": false, 00:09:56.142 "zone_management": false, 00:09:56.142 "zone_append": false, 00:09:56.142 "compare": false, 00:09:56.142 "compare_and_write": false, 00:09:56.142 "abort": true, 00:09:56.142 "seek_hole": false, 00:09:56.142 "seek_data": false, 00:09:56.142 "copy": true, 00:09:56.142 "nvme_iov_md": false 00:09:56.142 }, 00:09:56.142 "memory_domains": [ 00:09:56.142 { 00:09:56.142 "dma_device_id": "system", 00:09:56.142 "dma_device_type": 1 00:09:56.142 }, 00:09:56.142 { 00:09:56.142 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.142 "dma_device_type": 2 00:09:56.142 } 00:09:56.142 ], 00:09:56.142 "driver_specific": {} 00:09:56.142 } 00:09:56.142 ] 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.142 14:09:33 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.142 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.143 "name": "Existed_Raid", 00:09:56.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.143 "strip_size_kb": 64, 00:09:56.143 "state": "configuring", 00:09:56.143 "raid_level": "raid0", 00:09:56.143 "superblock": false, 00:09:56.143 "num_base_bdevs": 3, 00:09:56.143 "num_base_bdevs_discovered": 2, 00:09:56.143 "num_base_bdevs_operational": 3, 00:09:56.143 "base_bdevs_list": [ 00:09:56.143 { 00:09:56.143 "name": "BaseBdev1", 00:09:56.143 "uuid": "03e55e71-2cd0-416e-abf9-ebda3c1540f4", 00:09:56.143 "is_configured": true, 00:09:56.143 "data_offset": 0, 00:09:56.143 "data_size": 65536 00:09:56.143 }, 00:09:56.143 { 00:09:56.143 "name": "BaseBdev2", 00:09:56.143 "uuid": "00017e32-b86e-432d-bd04-aef6a46a9ee5", 00:09:56.143 "is_configured": true, 00:09:56.143 "data_offset": 0, 00:09:56.143 "data_size": 65536 00:09:56.143 }, 00:09:56.143 { 00:09:56.143 "name": "BaseBdev3", 00:09:56.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:56.143 "is_configured": false, 00:09:56.143 "data_offset": 0, 00:09:56.143 "data_size": 0 00:09:56.143 } 00:09:56.143 ] 00:09:56.143 }' 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.143 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.712 [2024-11-27 14:09:33.812812] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:56.712 [2024-11-27 14:09:33.812951] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:09:56.712 [2024-11-27 14:09:33.812990] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:56.712 [2024-11-27 14:09:33.813395] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:09:56.712 [2024-11-27 14:09:33.813634] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:09:56.712 [2024-11-27 14:09:33.813658] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:09:56.712 [2024-11-27 14:09:33.814032] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.712 BaseBdev3 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.712 
14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:56.712 [ 00:09:56.712 { 00:09:56.712 "name": "BaseBdev3", 00:09:56.712 "aliases": [ 00:09:56.712 "c8d5b05e-0369-44ac-89ab-691afde1ea4a" 00:09:56.712 ], 00:09:56.712 "product_name": "Malloc disk", 00:09:56.712 "block_size": 512, 00:09:56.712 "num_blocks": 65536, 00:09:56.712 "uuid": "c8d5b05e-0369-44ac-89ab-691afde1ea4a", 00:09:56.712 "assigned_rate_limits": { 00:09:56.712 "rw_ios_per_sec": 0, 00:09:56.712 "rw_mbytes_per_sec": 0, 00:09:56.712 "r_mbytes_per_sec": 0, 00:09:56.712 "w_mbytes_per_sec": 0 00:09:56.712 }, 00:09:56.712 "claimed": true, 00:09:56.712 "claim_type": "exclusive_write", 00:09:56.712 "zoned": false, 00:09:56.712 "supported_io_types": { 00:09:56.712 "read": true, 00:09:56.712 "write": true, 00:09:56.712 "unmap": true, 00:09:56.712 "flush": true, 00:09:56.712 "reset": true, 00:09:56.712 "nvme_admin": false, 00:09:56.712 "nvme_io": false, 00:09:56.712 "nvme_io_md": false, 00:09:56.712 "write_zeroes": true, 00:09:56.712 "zcopy": true, 00:09:56.712 "get_zone_info": false, 00:09:56.712 "zone_management": false, 00:09:56.712 "zone_append": false, 00:09:56.712 "compare": false, 00:09:56.712 "compare_and_write": false, 00:09:56.712 "abort": true, 00:09:56.712 "seek_hole": false, 00:09:56.712 "seek_data": false, 00:09:56.712 "copy": true, 00:09:56.712 "nvme_iov_md": false 00:09:56.712 }, 00:09:56.712 "memory_domains": [ 00:09:56.712 { 00:09:56.712 "dma_device_id": "system", 00:09:56.712 "dma_device_type": 1 00:09:56.712 }, 00:09:56.712 { 00:09:56.712 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.712 "dma_device_type": 2 00:09:56.712 } 00:09:56.712 ], 00:09:56.712 "driver_specific": {} 00:09:56.712 } 00:09:56.712 ] 
00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:56.712 "name": "Existed_Raid", 00:09:56.712 "uuid": "321b24c0-5ca6-42c5-8044-3dfe29301324", 00:09:56.712 "strip_size_kb": 64, 00:09:56.712 "state": "online", 00:09:56.712 "raid_level": "raid0", 00:09:56.712 "superblock": false, 00:09:56.712 "num_base_bdevs": 3, 00:09:56.712 "num_base_bdevs_discovered": 3, 00:09:56.712 "num_base_bdevs_operational": 3, 00:09:56.712 "base_bdevs_list": [ 00:09:56.712 { 00:09:56.712 "name": "BaseBdev1", 00:09:56.712 "uuid": "03e55e71-2cd0-416e-abf9-ebda3c1540f4", 00:09:56.712 "is_configured": true, 00:09:56.712 "data_offset": 0, 00:09:56.712 "data_size": 65536 00:09:56.712 }, 00:09:56.712 { 00:09:56.712 "name": "BaseBdev2", 00:09:56.712 "uuid": "00017e32-b86e-432d-bd04-aef6a46a9ee5", 00:09:56.712 "is_configured": true, 00:09:56.712 "data_offset": 0, 00:09:56.712 "data_size": 65536 00:09:56.712 }, 00:09:56.712 { 00:09:56.712 "name": "BaseBdev3", 00:09:56.712 "uuid": "c8d5b05e-0369-44ac-89ab-691afde1ea4a", 00:09:56.712 "is_configured": true, 00:09:56.712 "data_offset": 0, 00:09:56.712 "data_size": 65536 00:09:56.712 } 00:09:56.712 ] 00:09:56.712 }' 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:56.712 14:09:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # 
local base_bdev_names 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.281 [2024-11-27 14:09:34.381457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.281 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:09:57.281 "name": "Existed_Raid", 00:09:57.281 "aliases": [ 00:09:57.281 "321b24c0-5ca6-42c5-8044-3dfe29301324" 00:09:57.281 ], 00:09:57.281 "product_name": "Raid Volume", 00:09:57.281 "block_size": 512, 00:09:57.281 "num_blocks": 196608, 00:09:57.281 "uuid": "321b24c0-5ca6-42c5-8044-3dfe29301324", 00:09:57.281 "assigned_rate_limits": { 00:09:57.281 "rw_ios_per_sec": 0, 00:09:57.281 "rw_mbytes_per_sec": 0, 00:09:57.281 "r_mbytes_per_sec": 0, 00:09:57.281 "w_mbytes_per_sec": 0 00:09:57.281 }, 00:09:57.281 "claimed": false, 00:09:57.281 "zoned": false, 00:09:57.281 "supported_io_types": { 00:09:57.281 "read": true, 00:09:57.281 "write": true, 00:09:57.281 "unmap": true, 00:09:57.281 "flush": true, 00:09:57.281 "reset": true, 00:09:57.281 "nvme_admin": false, 00:09:57.281 "nvme_io": false, 00:09:57.281 "nvme_io_md": false, 00:09:57.281 "write_zeroes": true, 00:09:57.281 "zcopy": false, 00:09:57.281 "get_zone_info": false, 00:09:57.281 "zone_management": false, 00:09:57.281 
"zone_append": false, 00:09:57.281 "compare": false, 00:09:57.281 "compare_and_write": false, 00:09:57.281 "abort": false, 00:09:57.281 "seek_hole": false, 00:09:57.281 "seek_data": false, 00:09:57.281 "copy": false, 00:09:57.281 "nvme_iov_md": false 00:09:57.281 }, 00:09:57.281 "memory_domains": [ 00:09:57.281 { 00:09:57.281 "dma_device_id": "system", 00:09:57.281 "dma_device_type": 1 00:09:57.281 }, 00:09:57.281 { 00:09:57.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.281 "dma_device_type": 2 00:09:57.281 }, 00:09:57.281 { 00:09:57.281 "dma_device_id": "system", 00:09:57.281 "dma_device_type": 1 00:09:57.281 }, 00:09:57.281 { 00:09:57.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.281 "dma_device_type": 2 00:09:57.281 }, 00:09:57.281 { 00:09:57.281 "dma_device_id": "system", 00:09:57.281 "dma_device_type": 1 00:09:57.281 }, 00:09:57.281 { 00:09:57.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.281 "dma_device_type": 2 00:09:57.281 } 00:09:57.281 ], 00:09:57.281 "driver_specific": { 00:09:57.281 "raid": { 00:09:57.281 "uuid": "321b24c0-5ca6-42c5-8044-3dfe29301324", 00:09:57.281 "strip_size_kb": 64, 00:09:57.281 "state": "online", 00:09:57.281 "raid_level": "raid0", 00:09:57.281 "superblock": false, 00:09:57.281 "num_base_bdevs": 3, 00:09:57.281 "num_base_bdevs_discovered": 3, 00:09:57.281 "num_base_bdevs_operational": 3, 00:09:57.281 "base_bdevs_list": [ 00:09:57.281 { 00:09:57.281 "name": "BaseBdev1", 00:09:57.281 "uuid": "03e55e71-2cd0-416e-abf9-ebda3c1540f4", 00:09:57.281 "is_configured": true, 00:09:57.281 "data_offset": 0, 00:09:57.281 "data_size": 65536 00:09:57.281 }, 00:09:57.281 { 00:09:57.281 "name": "BaseBdev2", 00:09:57.281 "uuid": "00017e32-b86e-432d-bd04-aef6a46a9ee5", 00:09:57.281 "is_configured": true, 00:09:57.281 "data_offset": 0, 00:09:57.281 "data_size": 65536 00:09:57.281 }, 00:09:57.281 { 00:09:57.281 "name": "BaseBdev3", 00:09:57.282 "uuid": "c8d5b05e-0369-44ac-89ab-691afde1ea4a", 00:09:57.282 "is_configured": true, 
00:09:57.282 "data_offset": 0, 00:09:57.282 "data_size": 65536 00:09:57.282 } 00:09:57.282 ] 00:09:57.282 } 00:09:57.282 } 00:09:57.282 }' 00:09:57.282 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.282 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:09:57.282 BaseBdev2 00:09:57.282 BaseBdev3' 00:09:57.282 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.282 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:09:57.282 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.282 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.282 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:09:57.282 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.282 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.541 14:09:34 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.541 [2024-11-27 14:09:34.697240] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:57.541 [2024-11-27 14:09:34.697292] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:57.541 [2024-11-27 14:09:34.697365] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:57.541 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:57.542 14:09:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:57.800 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:57.800 "name": "Existed_Raid", 00:09:57.800 "uuid": "321b24c0-5ca6-42c5-8044-3dfe29301324", 00:09:57.800 "strip_size_kb": 64, 00:09:57.800 "state": "offline", 00:09:57.800 "raid_level": "raid0", 00:09:57.800 "superblock": false, 00:09:57.800 "num_base_bdevs": 3, 00:09:57.800 "num_base_bdevs_discovered": 2, 00:09:57.800 "num_base_bdevs_operational": 2, 00:09:57.800 "base_bdevs_list": [ 00:09:57.800 { 00:09:57.800 "name": null, 00:09:57.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:57.800 "is_configured": false, 00:09:57.800 "data_offset": 0, 00:09:57.800 "data_size": 65536 00:09:57.801 }, 00:09:57.801 { 00:09:57.801 "name": "BaseBdev2", 00:09:57.801 "uuid": "00017e32-b86e-432d-bd04-aef6a46a9ee5", 00:09:57.801 "is_configured": true, 00:09:57.801 "data_offset": 0, 00:09:57.801 "data_size": 65536 00:09:57.801 }, 00:09:57.801 { 00:09:57.801 "name": "BaseBdev3", 00:09:57.801 "uuid": "c8d5b05e-0369-44ac-89ab-691afde1ea4a", 00:09:57.801 "is_configured": true, 00:09:57.801 "data_offset": 0, 00:09:57.801 "data_size": 65536 00:09:57.801 } 00:09:57.801 ] 00:09:57.801 }' 00:09:57.801 14:09:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:57.801 14:09:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.060 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:09:58.060 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.060 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.060 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.060 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.060 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.320 [2024-11-27 14:09:35.392760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.320 14:09:35 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.320 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.320 [2024-11-27 14:09:35.539415] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:58.320 [2024-11-27 14:09:35.539651] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | 
select(.)' 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.580 BaseBdev2 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.580 [ 00:09:58.580 { 00:09:58.580 "name": "BaseBdev2", 00:09:58.580 "aliases": [ 00:09:58.580 "866c7d54-5f00-4d45-8746-c6683a8a7b2d" 00:09:58.580 ], 00:09:58.580 "product_name": "Malloc disk", 00:09:58.580 "block_size": 512, 00:09:58.580 "num_blocks": 65536, 00:09:58.580 "uuid": "866c7d54-5f00-4d45-8746-c6683a8a7b2d", 00:09:58.580 "assigned_rate_limits": { 00:09:58.580 "rw_ios_per_sec": 0, 00:09:58.580 "rw_mbytes_per_sec": 0, 00:09:58.580 "r_mbytes_per_sec": 0, 00:09:58.580 "w_mbytes_per_sec": 0 00:09:58.580 }, 00:09:58.580 "claimed": false, 00:09:58.580 "zoned": false, 00:09:58.580 "supported_io_types": { 00:09:58.580 "read": true, 00:09:58.580 "write": true, 00:09:58.580 "unmap": true, 00:09:58.580 "flush": true, 00:09:58.580 "reset": true, 00:09:58.580 "nvme_admin": false, 00:09:58.580 "nvme_io": false, 00:09:58.580 "nvme_io_md": false, 00:09:58.580 "write_zeroes": true, 00:09:58.580 "zcopy": true, 00:09:58.580 "get_zone_info": false, 00:09:58.580 "zone_management": false, 00:09:58.580 "zone_append": false, 00:09:58.580 "compare": false, 00:09:58.580 "compare_and_write": false, 00:09:58.580 "abort": true, 00:09:58.580 "seek_hole": false, 00:09:58.580 "seek_data": false, 00:09:58.580 "copy": true, 00:09:58.580 "nvme_iov_md": false 00:09:58.580 }, 00:09:58.580 "memory_domains": [ 00:09:58.580 { 00:09:58.580 "dma_device_id": "system", 00:09:58.580 "dma_device_type": 1 00:09:58.580 }, 
00:09:58.580 { 00:09:58.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.580 "dma_device_type": 2 00:09:58.580 } 00:09:58.580 ], 00:09:58.580 "driver_specific": {} 00:09:58.580 } 00:09:58.580 ] 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.580 BaseBdev3 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:58.580 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.581 [ 00:09:58.581 { 00:09:58.581 "name": "BaseBdev3", 00:09:58.581 "aliases": [ 00:09:58.581 "d9a10c8f-a95e-4f5a-b31b-feea41d2ef15" 00:09:58.581 ], 00:09:58.581 "product_name": "Malloc disk", 00:09:58.581 "block_size": 512, 00:09:58.581 "num_blocks": 65536, 00:09:58.581 "uuid": "d9a10c8f-a95e-4f5a-b31b-feea41d2ef15", 00:09:58.581 "assigned_rate_limits": { 00:09:58.581 "rw_ios_per_sec": 0, 00:09:58.581 "rw_mbytes_per_sec": 0, 00:09:58.581 "r_mbytes_per_sec": 0, 00:09:58.581 "w_mbytes_per_sec": 0 00:09:58.581 }, 00:09:58.581 "claimed": false, 00:09:58.581 "zoned": false, 00:09:58.581 "supported_io_types": { 00:09:58.581 "read": true, 00:09:58.581 "write": true, 00:09:58.581 "unmap": true, 00:09:58.581 "flush": true, 00:09:58.581 "reset": true, 00:09:58.581 "nvme_admin": false, 00:09:58.581 "nvme_io": false, 00:09:58.581 "nvme_io_md": false, 00:09:58.581 "write_zeroes": true, 00:09:58.581 "zcopy": true, 00:09:58.581 "get_zone_info": false, 00:09:58.581 "zone_management": false, 00:09:58.581 "zone_append": false, 00:09:58.581 "compare": false, 00:09:58.581 "compare_and_write": false, 00:09:58.581 "abort": true, 00:09:58.581 "seek_hole": false, 00:09:58.581 "seek_data": false, 00:09:58.581 "copy": true, 00:09:58.581 "nvme_iov_md": false 00:09:58.581 }, 00:09:58.581 "memory_domains": [ 00:09:58.581 { 00:09:58.581 "dma_device_id": "system", 00:09:58.581 "dma_device_type": 1 00:09:58.581 }, 00:09:58.581 { 
00:09:58.581 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.581 "dma_device_type": 2 00:09:58.581 } 00:09:58.581 ], 00:09:58.581 "driver_specific": {} 00:09:58.581 } 00:09:58.581 ] 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.581 [2024-11-27 14:09:35.846389] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:58.581 [2024-11-27 14:09:35.846464] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:58.581 [2024-11-27 14:09:35.846518] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:58.581 [2024-11-27 14:09:35.849000] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:58.581 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:58.840 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:58.840 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:58.840 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:58.841 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:58.841 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:58.841 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:58.841 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:58.841 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:58.841 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:58.841 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:58.841 "name": "Existed_Raid", 00:09:58.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.841 "strip_size_kb": 64, 00:09:58.841 "state": "configuring", 00:09:58.841 "raid_level": "raid0", 00:09:58.841 "superblock": false, 00:09:58.841 "num_base_bdevs": 3, 00:09:58.841 "num_base_bdevs_discovered": 2, 00:09:58.841 "num_base_bdevs_operational": 3, 00:09:58.841 "base_bdevs_list": [ 00:09:58.841 { 00:09:58.841 "name": "BaseBdev1", 00:09:58.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:58.841 
"is_configured": false, 00:09:58.841 "data_offset": 0, 00:09:58.841 "data_size": 0 00:09:58.841 }, 00:09:58.841 { 00:09:58.841 "name": "BaseBdev2", 00:09:58.841 "uuid": "866c7d54-5f00-4d45-8746-c6683a8a7b2d", 00:09:58.841 "is_configured": true, 00:09:58.841 "data_offset": 0, 00:09:58.841 "data_size": 65536 00:09:58.841 }, 00:09:58.841 { 00:09:58.841 "name": "BaseBdev3", 00:09:58.841 "uuid": "d9a10c8f-a95e-4f5a-b31b-feea41d2ef15", 00:09:58.841 "is_configured": true, 00:09:58.841 "data_offset": 0, 00:09:58.841 "data_size": 65536 00:09:58.841 } 00:09:58.841 ] 00:09:58.841 }' 00:09:58.841 14:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:58.841 14:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.409 [2024-11-27 14:09:36.419460] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.409 14:09:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.409 "name": "Existed_Raid", 00:09:59.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.409 "strip_size_kb": 64, 00:09:59.409 "state": "configuring", 00:09:59.409 "raid_level": "raid0", 00:09:59.409 "superblock": false, 00:09:59.409 "num_base_bdevs": 3, 00:09:59.409 "num_base_bdevs_discovered": 1, 00:09:59.409 "num_base_bdevs_operational": 3, 00:09:59.409 "base_bdevs_list": [ 00:09:59.409 { 00:09:59.409 "name": "BaseBdev1", 00:09:59.409 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.409 "is_configured": false, 00:09:59.409 "data_offset": 0, 00:09:59.409 "data_size": 0 00:09:59.409 }, 00:09:59.409 { 00:09:59.409 "name": null, 00:09:59.409 "uuid": "866c7d54-5f00-4d45-8746-c6683a8a7b2d", 00:09:59.409 "is_configured": false, 00:09:59.409 "data_offset": 0, 
00:09:59.409 "data_size": 65536 00:09:59.409 }, 00:09:59.409 { 00:09:59.409 "name": "BaseBdev3", 00:09:59.409 "uuid": "d9a10c8f-a95e-4f5a-b31b-feea41d2ef15", 00:09:59.409 "is_configured": true, 00:09:59.409 "data_offset": 0, 00:09:59.409 "data_size": 65536 00:09:59.409 } 00:09:59.409 ] 00:09:59.409 }' 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.409 14:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.977 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.977 14:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:59.977 14:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.977 14:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.977 14:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.977 [2024-11-27 14:09:37.055251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:59.977 BaseBdev1 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local 
bdev_name=BaseBdev1 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.977 [ 00:09:59.977 { 00:09:59.977 "name": "BaseBdev1", 00:09:59.977 "aliases": [ 00:09:59.977 "1804bd8b-e658-4944-bd11-e85bda2eebf6" 00:09:59.977 ], 00:09:59.977 "product_name": "Malloc disk", 00:09:59.977 "block_size": 512, 00:09:59.977 "num_blocks": 65536, 00:09:59.977 "uuid": "1804bd8b-e658-4944-bd11-e85bda2eebf6", 00:09:59.977 "assigned_rate_limits": { 00:09:59.977 "rw_ios_per_sec": 0, 00:09:59.977 "rw_mbytes_per_sec": 0, 00:09:59.977 "r_mbytes_per_sec": 0, 00:09:59.977 "w_mbytes_per_sec": 0 00:09:59.977 }, 00:09:59.977 "claimed": true, 00:09:59.977 "claim_type": "exclusive_write", 00:09:59.977 "zoned": false, 00:09:59.977 "supported_io_types": { 00:09:59.977 "read": true, 00:09:59.977 "write": true, 00:09:59.977 "unmap": 
true, 00:09:59.977 "flush": true, 00:09:59.977 "reset": true, 00:09:59.977 "nvme_admin": false, 00:09:59.977 "nvme_io": false, 00:09:59.977 "nvme_io_md": false, 00:09:59.977 "write_zeroes": true, 00:09:59.977 "zcopy": true, 00:09:59.977 "get_zone_info": false, 00:09:59.977 "zone_management": false, 00:09:59.977 "zone_append": false, 00:09:59.977 "compare": false, 00:09:59.977 "compare_and_write": false, 00:09:59.977 "abort": true, 00:09:59.977 "seek_hole": false, 00:09:59.977 "seek_data": false, 00:09:59.977 "copy": true, 00:09:59.977 "nvme_iov_md": false 00:09:59.977 }, 00:09:59.977 "memory_domains": [ 00:09:59.977 { 00:09:59.977 "dma_device_id": "system", 00:09:59.977 "dma_device_type": 1 00:09:59.977 }, 00:09:59.977 { 00:09:59.977 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.977 "dma_device_type": 2 00:09:59.977 } 00:09:59.977 ], 00:09:59.977 "driver_specific": {} 00:09:59.977 } 00:09:59.977 ] 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:09:59.977 14:09:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:59.977 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:09:59.977 "name": "Existed_Raid", 00:09:59.977 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:59.977 "strip_size_kb": 64, 00:09:59.977 "state": "configuring", 00:09:59.977 "raid_level": "raid0", 00:09:59.977 "superblock": false, 00:09:59.977 "num_base_bdevs": 3, 00:09:59.978 "num_base_bdevs_discovered": 2, 00:09:59.978 "num_base_bdevs_operational": 3, 00:09:59.978 "base_bdevs_list": [ 00:09:59.978 { 00:09:59.978 "name": "BaseBdev1", 00:09:59.978 "uuid": "1804bd8b-e658-4944-bd11-e85bda2eebf6", 00:09:59.978 "is_configured": true, 00:09:59.978 "data_offset": 0, 00:09:59.978 "data_size": 65536 00:09:59.978 }, 00:09:59.978 { 00:09:59.978 "name": null, 00:09:59.978 "uuid": "866c7d54-5f00-4d45-8746-c6683a8a7b2d", 00:09:59.978 "is_configured": false, 00:09:59.978 "data_offset": 0, 00:09:59.978 "data_size": 65536 00:09:59.978 }, 00:09:59.978 { 00:09:59.978 "name": "BaseBdev3", 00:09:59.978 "uuid": "d9a10c8f-a95e-4f5a-b31b-feea41d2ef15", 00:09:59.978 "is_configured": true, 00:09:59.978 "data_offset": 0, 
00:09:59.978 "data_size": 65536 00:09:59.978 } 00:09:59.978 ] 00:09:59.978 }' 00:09:59.978 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:09:59.978 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.545 [2024-11-27 14:09:37.699517] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:00.545 "name": "Existed_Raid", 00:10:00.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:00.545 "strip_size_kb": 64, 00:10:00.545 "state": "configuring", 00:10:00.545 "raid_level": "raid0", 00:10:00.545 "superblock": false, 00:10:00.545 "num_base_bdevs": 3, 00:10:00.545 "num_base_bdevs_discovered": 1, 00:10:00.545 "num_base_bdevs_operational": 3, 00:10:00.545 "base_bdevs_list": [ 00:10:00.545 { 00:10:00.545 "name": "BaseBdev1", 00:10:00.545 "uuid": "1804bd8b-e658-4944-bd11-e85bda2eebf6", 00:10:00.545 "is_configured": true, 00:10:00.545 "data_offset": 0, 00:10:00.545 "data_size": 65536 00:10:00.545 }, 00:10:00.545 { 
00:10:00.545 "name": null, 00:10:00.545 "uuid": "866c7d54-5f00-4d45-8746-c6683a8a7b2d", 00:10:00.545 "is_configured": false, 00:10:00.545 "data_offset": 0, 00:10:00.545 "data_size": 65536 00:10:00.545 }, 00:10:00.545 { 00:10:00.545 "name": null, 00:10:00.545 "uuid": "d9a10c8f-a95e-4f5a-b31b-feea41d2ef15", 00:10:00.545 "is_configured": false, 00:10:00.545 "data_offset": 0, 00:10:00.545 "data_size": 65536 00:10:00.545 } 00:10:00.545 ] 00:10:00.545 }' 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:00.545 14:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.112 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.113 [2024-11-27 14:09:38.279694] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.113 "name": "Existed_Raid", 00:10:01.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.113 "strip_size_kb": 64, 00:10:01.113 "state": "configuring", 00:10:01.113 "raid_level": "raid0", 00:10:01.113 
"superblock": false, 00:10:01.113 "num_base_bdevs": 3, 00:10:01.113 "num_base_bdevs_discovered": 2, 00:10:01.113 "num_base_bdevs_operational": 3, 00:10:01.113 "base_bdevs_list": [ 00:10:01.113 { 00:10:01.113 "name": "BaseBdev1", 00:10:01.113 "uuid": "1804bd8b-e658-4944-bd11-e85bda2eebf6", 00:10:01.113 "is_configured": true, 00:10:01.113 "data_offset": 0, 00:10:01.113 "data_size": 65536 00:10:01.113 }, 00:10:01.113 { 00:10:01.113 "name": null, 00:10:01.113 "uuid": "866c7d54-5f00-4d45-8746-c6683a8a7b2d", 00:10:01.113 "is_configured": false, 00:10:01.113 "data_offset": 0, 00:10:01.113 "data_size": 65536 00:10:01.113 }, 00:10:01.113 { 00:10:01.113 "name": "BaseBdev3", 00:10:01.113 "uuid": "d9a10c8f-a95e-4f5a-b31b-feea41d2ef15", 00:10:01.113 "is_configured": true, 00:10:01.113 "data_offset": 0, 00:10:01.113 "data_size": 65536 00:10:01.113 } 00:10:01.113 ] 00:10:01.113 }' 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.113 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.679 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:01.679 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.679 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.679 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.679 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.679 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:01.679 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:01.679 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:10:01.679 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.679 [2024-11-27 14:09:38.879900] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:01.939 14:09:38 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.939 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:01.939 "name": "Existed_Raid", 00:10:01.939 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:01.939 "strip_size_kb": 64, 00:10:01.939 "state": "configuring", 00:10:01.939 "raid_level": "raid0", 00:10:01.939 "superblock": false, 00:10:01.939 "num_base_bdevs": 3, 00:10:01.939 "num_base_bdevs_discovered": 1, 00:10:01.939 "num_base_bdevs_operational": 3, 00:10:01.939 "base_bdevs_list": [ 00:10:01.939 { 00:10:01.939 "name": null, 00:10:01.939 "uuid": "1804bd8b-e658-4944-bd11-e85bda2eebf6", 00:10:01.939 "is_configured": false, 00:10:01.939 "data_offset": 0, 00:10:01.939 "data_size": 65536 00:10:01.939 }, 00:10:01.939 { 00:10:01.939 "name": null, 00:10:01.939 "uuid": "866c7d54-5f00-4d45-8746-c6683a8a7b2d", 00:10:01.939 "is_configured": false, 00:10:01.939 "data_offset": 0, 00:10:01.939 "data_size": 65536 00:10:01.939 }, 00:10:01.939 { 00:10:01.939 "name": "BaseBdev3", 00:10:01.939 "uuid": "d9a10c8f-a95e-4f5a-b31b-feea41d2ef15", 00:10:01.939 "is_configured": true, 00:10:01.939 "data_offset": 0, 00:10:01.939 "data_size": 65536 00:10:01.939 } 00:10:01.939 ] 00:10:01.939 }' 00:10:01.939 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:01.939 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.508 [2024-11-27 14:09:39.542956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:02.508 "name": "Existed_Raid", 00:10:02.508 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:02.508 "strip_size_kb": 64, 00:10:02.508 "state": "configuring", 00:10:02.508 "raid_level": "raid0", 00:10:02.508 "superblock": false, 00:10:02.508 "num_base_bdevs": 3, 00:10:02.508 "num_base_bdevs_discovered": 2, 00:10:02.508 "num_base_bdevs_operational": 3, 00:10:02.508 "base_bdevs_list": [ 00:10:02.508 { 00:10:02.508 "name": null, 00:10:02.508 "uuid": "1804bd8b-e658-4944-bd11-e85bda2eebf6", 00:10:02.508 "is_configured": false, 00:10:02.508 "data_offset": 0, 00:10:02.508 "data_size": 65536 00:10:02.508 }, 00:10:02.508 { 00:10:02.508 "name": "BaseBdev2", 00:10:02.508 "uuid": "866c7d54-5f00-4d45-8746-c6683a8a7b2d", 00:10:02.508 "is_configured": true, 00:10:02.508 "data_offset": 0, 00:10:02.508 "data_size": 65536 00:10:02.508 }, 00:10:02.508 { 00:10:02.508 "name": "BaseBdev3", 00:10:02.508 "uuid": "d9a10c8f-a95e-4f5a-b31b-feea41d2ef15", 00:10:02.508 "is_configured": true, 00:10:02.508 "data_offset": 0, 00:10:02.508 "data_size": 65536 00:10:02.508 } 00:10:02.508 ] 00:10:02.508 }' 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:02.508 14:09:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.077 14:09:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1804bd8b-e658-4944-bd11-e85bda2eebf6 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.077 [2024-11-27 14:09:40.217138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:03.077 [2024-11-27 14:09:40.217201] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:03.077 [2024-11-27 14:09:40.217217] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:03.077 [2024-11-27 14:09:40.217519] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 
00:10:03.077 [2024-11-27 14:09:40.217710] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:03.077 [2024-11-27 14:09:40.217727] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:03.077 [2024-11-27 14:09:40.218072] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:03.077 NewBaseBdev 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.077 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:03.077 [ 00:10:03.077 { 00:10:03.077 "name": "NewBaseBdev", 00:10:03.077 "aliases": [ 00:10:03.077 "1804bd8b-e658-4944-bd11-e85bda2eebf6" 00:10:03.077 ], 00:10:03.077 "product_name": "Malloc disk", 00:10:03.077 "block_size": 512, 00:10:03.077 "num_blocks": 65536, 00:10:03.077 "uuid": "1804bd8b-e658-4944-bd11-e85bda2eebf6", 00:10:03.077 "assigned_rate_limits": { 00:10:03.077 "rw_ios_per_sec": 0, 00:10:03.077 "rw_mbytes_per_sec": 0, 00:10:03.077 "r_mbytes_per_sec": 0, 00:10:03.077 "w_mbytes_per_sec": 0 00:10:03.077 }, 00:10:03.077 "claimed": true, 00:10:03.077 "claim_type": "exclusive_write", 00:10:03.077 "zoned": false, 00:10:03.077 "supported_io_types": { 00:10:03.077 "read": true, 00:10:03.077 "write": true, 00:10:03.077 "unmap": true, 00:10:03.077 "flush": true, 00:10:03.077 "reset": true, 00:10:03.077 "nvme_admin": false, 00:10:03.077 "nvme_io": false, 00:10:03.077 "nvme_io_md": false, 00:10:03.077 "write_zeroes": true, 00:10:03.077 "zcopy": true, 00:10:03.077 "get_zone_info": false, 00:10:03.077 "zone_management": false, 00:10:03.077 "zone_append": false, 00:10:03.077 "compare": false, 00:10:03.077 "compare_and_write": false, 00:10:03.077 "abort": true, 00:10:03.077 "seek_hole": false, 00:10:03.077 "seek_data": false, 00:10:03.077 "copy": true, 00:10:03.077 "nvme_iov_md": false 00:10:03.077 }, 00:10:03.078 "memory_domains": [ 00:10:03.078 { 00:10:03.078 "dma_device_id": "system", 00:10:03.078 "dma_device_type": 1 00:10:03.078 }, 00:10:03.078 { 00:10:03.078 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.078 "dma_device_type": 2 00:10:03.078 } 00:10:03.078 ], 00:10:03.078 "driver_specific": {} 00:10:03.078 } 00:10:03.078 ] 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state 
Existed_Raid online raid0 64 3 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:03.078 "name": "Existed_Raid", 00:10:03.078 "uuid": "eb9b23fc-c9c8-4bac-a5a0-62d967e08710", 00:10:03.078 "strip_size_kb": 64, 00:10:03.078 "state": "online", 00:10:03.078 "raid_level": "raid0", 00:10:03.078 "superblock": false, 00:10:03.078 "num_base_bdevs": 3, 00:10:03.078 
"num_base_bdevs_discovered": 3, 00:10:03.078 "num_base_bdevs_operational": 3, 00:10:03.078 "base_bdevs_list": [ 00:10:03.078 { 00:10:03.078 "name": "NewBaseBdev", 00:10:03.078 "uuid": "1804bd8b-e658-4944-bd11-e85bda2eebf6", 00:10:03.078 "is_configured": true, 00:10:03.078 "data_offset": 0, 00:10:03.078 "data_size": 65536 00:10:03.078 }, 00:10:03.078 { 00:10:03.078 "name": "BaseBdev2", 00:10:03.078 "uuid": "866c7d54-5f00-4d45-8746-c6683a8a7b2d", 00:10:03.078 "is_configured": true, 00:10:03.078 "data_offset": 0, 00:10:03.078 "data_size": 65536 00:10:03.078 }, 00:10:03.078 { 00:10:03.078 "name": "BaseBdev3", 00:10:03.078 "uuid": "d9a10c8f-a95e-4f5a-b31b-feea41d2ef15", 00:10:03.078 "is_configured": true, 00:10:03.078 "data_offset": 0, 00:10:03.078 "data_size": 65536 00:10:03.078 } 00:10:03.078 ] 00:10:03.078 }' 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:03.078 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.645 [2024-11-27 14:09:40.789732] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.645 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:03.645 "name": "Existed_Raid", 00:10:03.645 "aliases": [ 00:10:03.645 "eb9b23fc-c9c8-4bac-a5a0-62d967e08710" 00:10:03.645 ], 00:10:03.645 "product_name": "Raid Volume", 00:10:03.645 "block_size": 512, 00:10:03.645 "num_blocks": 196608, 00:10:03.645 "uuid": "eb9b23fc-c9c8-4bac-a5a0-62d967e08710", 00:10:03.645 "assigned_rate_limits": { 00:10:03.645 "rw_ios_per_sec": 0, 00:10:03.645 "rw_mbytes_per_sec": 0, 00:10:03.645 "r_mbytes_per_sec": 0, 00:10:03.645 "w_mbytes_per_sec": 0 00:10:03.645 }, 00:10:03.645 "claimed": false, 00:10:03.645 "zoned": false, 00:10:03.645 "supported_io_types": { 00:10:03.645 "read": true, 00:10:03.645 "write": true, 00:10:03.645 "unmap": true, 00:10:03.645 "flush": true, 00:10:03.645 "reset": true, 00:10:03.645 "nvme_admin": false, 00:10:03.645 "nvme_io": false, 00:10:03.645 "nvme_io_md": false, 00:10:03.645 "write_zeroes": true, 00:10:03.645 "zcopy": false, 00:10:03.645 "get_zone_info": false, 00:10:03.645 "zone_management": false, 00:10:03.645 "zone_append": false, 00:10:03.645 "compare": false, 00:10:03.645 "compare_and_write": false, 00:10:03.645 "abort": false, 00:10:03.645 "seek_hole": false, 00:10:03.645 "seek_data": false, 00:10:03.645 "copy": false, 00:10:03.645 "nvme_iov_md": false 00:10:03.645 }, 00:10:03.645 "memory_domains": [ 00:10:03.645 { 00:10:03.645 "dma_device_id": "system", 00:10:03.645 "dma_device_type": 1 00:10:03.645 }, 00:10:03.645 { 00:10:03.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.645 "dma_device_type": 2 00:10:03.645 }, 00:10:03.645 
{ 00:10:03.646 "dma_device_id": "system", 00:10:03.646 "dma_device_type": 1 00:10:03.646 }, 00:10:03.646 { 00:10:03.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.646 "dma_device_type": 2 00:10:03.646 }, 00:10:03.646 { 00:10:03.646 "dma_device_id": "system", 00:10:03.646 "dma_device_type": 1 00:10:03.646 }, 00:10:03.646 { 00:10:03.646 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.646 "dma_device_type": 2 00:10:03.646 } 00:10:03.646 ], 00:10:03.646 "driver_specific": { 00:10:03.646 "raid": { 00:10:03.646 "uuid": "eb9b23fc-c9c8-4bac-a5a0-62d967e08710", 00:10:03.646 "strip_size_kb": 64, 00:10:03.646 "state": "online", 00:10:03.646 "raid_level": "raid0", 00:10:03.646 "superblock": false, 00:10:03.646 "num_base_bdevs": 3, 00:10:03.646 "num_base_bdevs_discovered": 3, 00:10:03.646 "num_base_bdevs_operational": 3, 00:10:03.646 "base_bdevs_list": [ 00:10:03.646 { 00:10:03.646 "name": "NewBaseBdev", 00:10:03.646 "uuid": "1804bd8b-e658-4944-bd11-e85bda2eebf6", 00:10:03.646 "is_configured": true, 00:10:03.646 "data_offset": 0, 00:10:03.646 "data_size": 65536 00:10:03.646 }, 00:10:03.646 { 00:10:03.646 "name": "BaseBdev2", 00:10:03.646 "uuid": "866c7d54-5f00-4d45-8746-c6683a8a7b2d", 00:10:03.646 "is_configured": true, 00:10:03.646 "data_offset": 0, 00:10:03.646 "data_size": 65536 00:10:03.646 }, 00:10:03.646 { 00:10:03.646 "name": "BaseBdev3", 00:10:03.646 "uuid": "d9a10c8f-a95e-4f5a-b31b-feea41d2ef15", 00:10:03.646 "is_configured": true, 00:10:03.646 "data_offset": 0, 00:10:03.646 "data_size": 65536 00:10:03.646 } 00:10:03.646 ] 00:10:03.646 } 00:10:03.646 } 00:10:03.646 }' 00:10:03.646 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:03.646 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:03.646 BaseBdev2 00:10:03.646 BaseBdev3' 00:10:03.646 14:09:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.904 14:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.904 
14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:03.904 [2024-11-27 14:09:41.085460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:03.904 [2024-11-27 14:09:41.086544] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:03.904 [2024-11-27 14:09:41.086674] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:03.904 [2024-11-27 14:09:41.086750] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:03.904 [2024-11-27 14:09:41.086785] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 63696 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 63696 ']' 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 63696 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63696 00:10:03.904 killing process with pid 63696 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63696' 00:10:03.904 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 63696 00:10:03.905 [2024-11-27 14:09:41.119274] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:03.905 14:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 63696 00:10:04.163 [2024-11-27 14:09:41.386975] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:05.540 00:10:05.540 real 0m12.092s 00:10:05.540 user 0m20.124s 00:10:05.540 sys 0m1.632s 00:10:05.540 ************************************ 00:10:05.540 END TEST raid_state_function_test 00:10:05.540 
************************************ 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:05.540 14:09:42 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:10:05.540 14:09:42 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:05.540 14:09:42 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:05.540 14:09:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:05.540 ************************************ 00:10:05.540 START TEST raid_state_function_test_sb 00:10:05.540 ************************************ 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 3 true 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # 
echo BaseBdev2 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:05.540 Process raid pid: 64339 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=64339 00:10:05.540 14:09:42 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 64339' 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 64339 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 64339 ']' 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:05.540 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:05.540 14:09:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:05.540 [2024-11-27 14:09:42.598894] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:10:05.540 [2024-11-27 14:09:42.599409] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:05.540 [2024-11-27 14:09:42.774547] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:05.828 [2024-11-27 14:09:42.907531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:06.086 [2024-11-27 14:09:43.113237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.086 [2024-11-27 14:09:43.113496] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.344 [2024-11-27 14:09:43.580225] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.344 [2024-11-27 14:09:43.580304] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.344 [2024-11-27 14:09:43.580338] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.344 [2024-11-27 14:09:43.580355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.344 [2024-11-27 14:09:43.580365] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with 
name: BaseBdev3 00:10:06.344 [2024-11-27 14:09:43.580395] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:06.344 14:09:43 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.603 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:06.603 "name": "Existed_Raid", 00:10:06.603 "uuid": "86633616-130c-4abc-9b6f-5f5cef03cb51", 00:10:06.603 "strip_size_kb": 64, 00:10:06.603 "state": "configuring", 00:10:06.603 "raid_level": "raid0", 00:10:06.603 "superblock": true, 00:10:06.603 "num_base_bdevs": 3, 00:10:06.603 "num_base_bdevs_discovered": 0, 00:10:06.603 "num_base_bdevs_operational": 3, 00:10:06.603 "base_bdevs_list": [ 00:10:06.603 { 00:10:06.603 "name": "BaseBdev1", 00:10:06.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.603 "is_configured": false, 00:10:06.603 "data_offset": 0, 00:10:06.603 "data_size": 0 00:10:06.603 }, 00:10:06.603 { 00:10:06.603 "name": "BaseBdev2", 00:10:06.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.603 "is_configured": false, 00:10:06.603 "data_offset": 0, 00:10:06.603 "data_size": 0 00:10:06.603 }, 00:10:06.603 { 00:10:06.603 "name": "BaseBdev3", 00:10:06.603 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:06.603 "is_configured": false, 00:10:06.603 "data_offset": 0, 00:10:06.603 "data_size": 0 00:10:06.603 } 00:10:06.603 ] 00:10:06.603 }' 00:10:06.603 14:09:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:06.603 14:09:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.862 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:06.862 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.862 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.862 [2024-11-27 14:09:44.068260] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:06.862 [2024-11-27 14:09:44.068475] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:06.862 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.862 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:06.862 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.862 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.862 [2024-11-27 14:09:44.076264] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:06.862 [2024-11-27 14:09:44.076329] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:06.863 [2024-11-27 14:09:44.076345] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:06.863 [2024-11-27 14:09:44.076359] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:06.863 [2024-11-27 14:09:44.076368] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:06.863 [2024-11-27 14:09:44.076381] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.863 [2024-11-27 14:09:44.118231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:06.863 BaseBdev1 
00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:06.863 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.122 [ 00:10:07.122 { 00:10:07.122 "name": "BaseBdev1", 00:10:07.122 "aliases": [ 00:10:07.122 "773de7d4-acd3-4173-b0c2-c3c65f079e5b" 00:10:07.122 ], 00:10:07.122 "product_name": "Malloc disk", 00:10:07.122 "block_size": 512, 00:10:07.122 "num_blocks": 65536, 00:10:07.122 "uuid": "773de7d4-acd3-4173-b0c2-c3c65f079e5b", 00:10:07.122 "assigned_rate_limits": { 00:10:07.122 
"rw_ios_per_sec": 0, 00:10:07.122 "rw_mbytes_per_sec": 0, 00:10:07.122 "r_mbytes_per_sec": 0, 00:10:07.122 "w_mbytes_per_sec": 0 00:10:07.122 }, 00:10:07.122 "claimed": true, 00:10:07.122 "claim_type": "exclusive_write", 00:10:07.122 "zoned": false, 00:10:07.122 "supported_io_types": { 00:10:07.122 "read": true, 00:10:07.122 "write": true, 00:10:07.122 "unmap": true, 00:10:07.122 "flush": true, 00:10:07.122 "reset": true, 00:10:07.122 "nvme_admin": false, 00:10:07.122 "nvme_io": false, 00:10:07.122 "nvme_io_md": false, 00:10:07.122 "write_zeroes": true, 00:10:07.122 "zcopy": true, 00:10:07.122 "get_zone_info": false, 00:10:07.122 "zone_management": false, 00:10:07.122 "zone_append": false, 00:10:07.122 "compare": false, 00:10:07.122 "compare_and_write": false, 00:10:07.122 "abort": true, 00:10:07.122 "seek_hole": false, 00:10:07.122 "seek_data": false, 00:10:07.122 "copy": true, 00:10:07.122 "nvme_iov_md": false 00:10:07.122 }, 00:10:07.122 "memory_domains": [ 00:10:07.122 { 00:10:07.122 "dma_device_id": "system", 00:10:07.123 "dma_device_type": 1 00:10:07.123 }, 00:10:07.123 { 00:10:07.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:07.123 "dma_device_type": 2 00:10:07.123 } 00:10:07.123 ], 00:10:07.123 "driver_specific": {} 00:10:07.123 } 00:10:07.123 ] 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid0 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.123 "name": "Existed_Raid", 00:10:07.123 "uuid": "19edde28-5c9f-41ae-9290-cd16019e8226", 00:10:07.123 "strip_size_kb": 64, 00:10:07.123 "state": "configuring", 00:10:07.123 "raid_level": "raid0", 00:10:07.123 "superblock": true, 00:10:07.123 "num_base_bdevs": 3, 00:10:07.123 "num_base_bdevs_discovered": 1, 00:10:07.123 "num_base_bdevs_operational": 3, 00:10:07.123 "base_bdevs_list": [ 00:10:07.123 { 00:10:07.123 "name": "BaseBdev1", 00:10:07.123 "uuid": "773de7d4-acd3-4173-b0c2-c3c65f079e5b", 00:10:07.123 "is_configured": true, 00:10:07.123 "data_offset": 2048, 00:10:07.123 "data_size": 63488 
00:10:07.123 }, 00:10:07.123 { 00:10:07.123 "name": "BaseBdev2", 00:10:07.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.123 "is_configured": false, 00:10:07.123 "data_offset": 0, 00:10:07.123 "data_size": 0 00:10:07.123 }, 00:10:07.123 { 00:10:07.123 "name": "BaseBdev3", 00:10:07.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.123 "is_configured": false, 00:10:07.123 "data_offset": 0, 00:10:07.123 "data_size": 0 00:10:07.123 } 00:10:07.123 ] 00:10:07.123 }' 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.123 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.691 [2024-11-27 14:09:44.682485] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:07.691 [2024-11-27 14:09:44.682545] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.691 [2024-11-27 14:09:44.690537] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:07.691 [2024-11-27 
14:09:44.693098] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:07.691 [2024-11-27 14:09:44.693182] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:07.691 [2024-11-27 14:09:44.693214] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:07.691 [2024-11-27 14:09:44.693243] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:07.691 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:07.692 "name": "Existed_Raid", 00:10:07.692 "uuid": "93350a59-5bc9-4253-8fe2-21d83374f474", 00:10:07.692 "strip_size_kb": 64, 00:10:07.692 "state": "configuring", 00:10:07.692 "raid_level": "raid0", 00:10:07.692 "superblock": true, 00:10:07.692 "num_base_bdevs": 3, 00:10:07.692 "num_base_bdevs_discovered": 1, 00:10:07.692 "num_base_bdevs_operational": 3, 00:10:07.692 "base_bdevs_list": [ 00:10:07.692 { 00:10:07.692 "name": "BaseBdev1", 00:10:07.692 "uuid": "773de7d4-acd3-4173-b0c2-c3c65f079e5b", 00:10:07.692 "is_configured": true, 00:10:07.692 "data_offset": 2048, 00:10:07.692 "data_size": 63488 00:10:07.692 }, 00:10:07.692 { 00:10:07.692 "name": "BaseBdev2", 00:10:07.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.692 "is_configured": false, 00:10:07.692 "data_offset": 0, 00:10:07.692 "data_size": 0 00:10:07.692 }, 00:10:07.692 { 00:10:07.692 "name": "BaseBdev3", 00:10:07.692 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:07.692 "is_configured": false, 00:10:07.692 "data_offset": 0, 00:10:07.692 "data_size": 0 00:10:07.692 } 00:10:07.692 ] 00:10:07.692 }' 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:07.692 14:09:44 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:07.952 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:07.952 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:07.952 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.212 [2024-11-27 14:09:45.255148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:08.212 BaseBdev2 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.212 [ 00:10:08.212 { 00:10:08.212 "name": "BaseBdev2", 00:10:08.212 "aliases": [ 00:10:08.212 "4c3b7fde-53ea-499b-8ac1-ff5964249472" 00:10:08.212 ], 00:10:08.212 "product_name": "Malloc disk", 00:10:08.212 "block_size": 512, 00:10:08.212 "num_blocks": 65536, 00:10:08.212 "uuid": "4c3b7fde-53ea-499b-8ac1-ff5964249472", 00:10:08.212 "assigned_rate_limits": { 00:10:08.212 "rw_ios_per_sec": 0, 00:10:08.212 "rw_mbytes_per_sec": 0, 00:10:08.212 "r_mbytes_per_sec": 0, 00:10:08.212 "w_mbytes_per_sec": 0 00:10:08.212 }, 00:10:08.212 "claimed": true, 00:10:08.212 "claim_type": "exclusive_write", 00:10:08.212 "zoned": false, 00:10:08.212 "supported_io_types": { 00:10:08.212 "read": true, 00:10:08.212 "write": true, 00:10:08.212 "unmap": true, 00:10:08.212 "flush": true, 00:10:08.212 "reset": true, 00:10:08.212 "nvme_admin": false, 00:10:08.212 "nvme_io": false, 00:10:08.212 "nvme_io_md": false, 00:10:08.212 "write_zeroes": true, 00:10:08.212 "zcopy": true, 00:10:08.212 "get_zone_info": false, 00:10:08.212 "zone_management": false, 00:10:08.212 "zone_append": false, 00:10:08.212 "compare": false, 00:10:08.212 "compare_and_write": false, 00:10:08.212 "abort": true, 00:10:08.212 "seek_hole": false, 00:10:08.212 "seek_data": false, 00:10:08.212 "copy": true, 00:10:08.212 "nvme_iov_md": false 00:10:08.212 }, 00:10:08.212 "memory_domains": [ 00:10:08.212 { 00:10:08.212 "dma_device_id": "system", 00:10:08.212 "dma_device_type": 1 00:10:08.212 }, 00:10:08.212 { 00:10:08.212 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.212 "dma_device_type": 2 00:10:08.212 } 00:10:08.212 ], 00:10:08.212 "driver_specific": {} 00:10:08.212 } 00:10:08.212 ] 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@911 -- # return 0 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.212 "name": "Existed_Raid", 00:10:08.212 "uuid": "93350a59-5bc9-4253-8fe2-21d83374f474", 00:10:08.212 "strip_size_kb": 64, 00:10:08.212 "state": "configuring", 00:10:08.212 "raid_level": "raid0", 00:10:08.212 "superblock": true, 00:10:08.212 "num_base_bdevs": 3, 00:10:08.212 "num_base_bdevs_discovered": 2, 00:10:08.212 "num_base_bdevs_operational": 3, 00:10:08.212 "base_bdevs_list": [ 00:10:08.212 { 00:10:08.212 "name": "BaseBdev1", 00:10:08.212 "uuid": "773de7d4-acd3-4173-b0c2-c3c65f079e5b", 00:10:08.212 "is_configured": true, 00:10:08.212 "data_offset": 2048, 00:10:08.212 "data_size": 63488 00:10:08.212 }, 00:10:08.212 { 00:10:08.212 "name": "BaseBdev2", 00:10:08.212 "uuid": "4c3b7fde-53ea-499b-8ac1-ff5964249472", 00:10:08.212 "is_configured": true, 00:10:08.212 "data_offset": 2048, 00:10:08.212 "data_size": 63488 00:10:08.212 }, 00:10:08.212 { 00:10:08.212 "name": "BaseBdev3", 00:10:08.212 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:08.212 "is_configured": false, 00:10:08.212 "data_offset": 0, 00:10:08.212 "data_size": 0 00:10:08.212 } 00:10:08.212 ] 00:10:08.212 }' 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.212 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.780 [2024-11-27 14:09:45.860489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:08.780 [2024-11-27 14:09:45.860888] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:08.780 [2024-11-27 14:09:45.860921] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:08.780 [2024-11-27 14:09:45.861355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:08.780 BaseBdev3 00:10:08.780 [2024-11-27 14:09:45.861584] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:08.780 [2024-11-27 14:09:45.861828] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:08.780 [2024-11-27 14:09:45.862097] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.780 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.780 [ 00:10:08.780 { 00:10:08.780 "name": "BaseBdev3", 00:10:08.780 "aliases": [ 00:10:08.780 "002236bc-93b9-472d-9a13-3c13cc651db4" 00:10:08.780 ], 00:10:08.780 "product_name": "Malloc disk", 00:10:08.780 "block_size": 512, 00:10:08.780 "num_blocks": 65536, 00:10:08.780 "uuid": "002236bc-93b9-472d-9a13-3c13cc651db4", 00:10:08.780 "assigned_rate_limits": { 00:10:08.780 "rw_ios_per_sec": 0, 00:10:08.780 "rw_mbytes_per_sec": 0, 00:10:08.780 "r_mbytes_per_sec": 0, 00:10:08.780 "w_mbytes_per_sec": 0 00:10:08.780 }, 00:10:08.780 "claimed": true, 00:10:08.780 "claim_type": "exclusive_write", 00:10:08.780 "zoned": false, 00:10:08.780 "supported_io_types": { 00:10:08.781 "read": true, 00:10:08.781 "write": true, 00:10:08.781 "unmap": true, 00:10:08.781 "flush": true, 00:10:08.781 "reset": true, 00:10:08.781 "nvme_admin": false, 00:10:08.781 "nvme_io": false, 00:10:08.781 "nvme_io_md": false, 00:10:08.781 "write_zeroes": true, 00:10:08.781 "zcopy": true, 00:10:08.781 "get_zone_info": false, 00:10:08.781 "zone_management": false, 00:10:08.781 "zone_append": false, 00:10:08.781 "compare": false, 00:10:08.781 "compare_and_write": false, 00:10:08.781 "abort": true, 00:10:08.781 "seek_hole": false, 00:10:08.781 "seek_data": false, 00:10:08.781 "copy": true, 00:10:08.781 "nvme_iov_md": false 00:10:08.781 }, 00:10:08.781 "memory_domains": [ 00:10:08.781 { 00:10:08.781 "dma_device_id": "system", 00:10:08.781 "dma_device_type": 1 00:10:08.781 }, 00:10:08.781 { 00:10:08.781 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:08.781 "dma_device_type": 2 00:10:08.781 } 00:10:08.781 ], 00:10:08.781 "driver_specific": 
{} 00:10:08.781 } 00:10:08.781 ] 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:08.781 "name": "Existed_Raid", 00:10:08.781 "uuid": "93350a59-5bc9-4253-8fe2-21d83374f474", 00:10:08.781 "strip_size_kb": 64, 00:10:08.781 "state": "online", 00:10:08.781 "raid_level": "raid0", 00:10:08.781 "superblock": true, 00:10:08.781 "num_base_bdevs": 3, 00:10:08.781 "num_base_bdevs_discovered": 3, 00:10:08.781 "num_base_bdevs_operational": 3, 00:10:08.781 "base_bdevs_list": [ 00:10:08.781 { 00:10:08.781 "name": "BaseBdev1", 00:10:08.781 "uuid": "773de7d4-acd3-4173-b0c2-c3c65f079e5b", 00:10:08.781 "is_configured": true, 00:10:08.781 "data_offset": 2048, 00:10:08.781 "data_size": 63488 00:10:08.781 }, 00:10:08.781 { 00:10:08.781 "name": "BaseBdev2", 00:10:08.781 "uuid": "4c3b7fde-53ea-499b-8ac1-ff5964249472", 00:10:08.781 "is_configured": true, 00:10:08.781 "data_offset": 2048, 00:10:08.781 "data_size": 63488 00:10:08.781 }, 00:10:08.781 { 00:10:08.781 "name": "BaseBdev3", 00:10:08.781 "uuid": "002236bc-93b9-472d-9a13-3c13cc651db4", 00:10:08.781 "is_configured": true, 00:10:08.781 "data_offset": 2048, 00:10:08.781 "data_size": 63488 00:10:08.781 } 00:10:08.781 ] 00:10:08.781 }' 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:08.781 14:09:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # 
local raid_bdev_info 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.348 [2024-11-27 14:09:46.437126] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:09.348 "name": "Existed_Raid", 00:10:09.348 "aliases": [ 00:10:09.348 "93350a59-5bc9-4253-8fe2-21d83374f474" 00:10:09.348 ], 00:10:09.348 "product_name": "Raid Volume", 00:10:09.348 "block_size": 512, 00:10:09.348 "num_blocks": 190464, 00:10:09.348 "uuid": "93350a59-5bc9-4253-8fe2-21d83374f474", 00:10:09.348 "assigned_rate_limits": { 00:10:09.348 "rw_ios_per_sec": 0, 00:10:09.348 "rw_mbytes_per_sec": 0, 00:10:09.348 "r_mbytes_per_sec": 0, 00:10:09.348 "w_mbytes_per_sec": 0 00:10:09.348 }, 00:10:09.348 "claimed": false, 00:10:09.348 "zoned": false, 00:10:09.348 "supported_io_types": { 00:10:09.348 "read": true, 00:10:09.348 "write": true, 00:10:09.348 "unmap": true, 00:10:09.348 "flush": true, 00:10:09.348 "reset": true, 00:10:09.348 "nvme_admin": false, 00:10:09.348 "nvme_io": false, 00:10:09.348 "nvme_io_md": false, 00:10:09.348 
"write_zeroes": true, 00:10:09.348 "zcopy": false, 00:10:09.348 "get_zone_info": false, 00:10:09.348 "zone_management": false, 00:10:09.348 "zone_append": false, 00:10:09.348 "compare": false, 00:10:09.348 "compare_and_write": false, 00:10:09.348 "abort": false, 00:10:09.348 "seek_hole": false, 00:10:09.348 "seek_data": false, 00:10:09.348 "copy": false, 00:10:09.348 "nvme_iov_md": false 00:10:09.348 }, 00:10:09.348 "memory_domains": [ 00:10:09.348 { 00:10:09.348 "dma_device_id": "system", 00:10:09.348 "dma_device_type": 1 00:10:09.348 }, 00:10:09.348 { 00:10:09.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.348 "dma_device_type": 2 00:10:09.348 }, 00:10:09.348 { 00:10:09.348 "dma_device_id": "system", 00:10:09.348 "dma_device_type": 1 00:10:09.348 }, 00:10:09.348 { 00:10:09.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.348 "dma_device_type": 2 00:10:09.348 }, 00:10:09.348 { 00:10:09.348 "dma_device_id": "system", 00:10:09.348 "dma_device_type": 1 00:10:09.348 }, 00:10:09.348 { 00:10:09.348 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.348 "dma_device_type": 2 00:10:09.348 } 00:10:09.348 ], 00:10:09.348 "driver_specific": { 00:10:09.348 "raid": { 00:10:09.348 "uuid": "93350a59-5bc9-4253-8fe2-21d83374f474", 00:10:09.348 "strip_size_kb": 64, 00:10:09.348 "state": "online", 00:10:09.348 "raid_level": "raid0", 00:10:09.348 "superblock": true, 00:10:09.348 "num_base_bdevs": 3, 00:10:09.348 "num_base_bdevs_discovered": 3, 00:10:09.348 "num_base_bdevs_operational": 3, 00:10:09.348 "base_bdevs_list": [ 00:10:09.348 { 00:10:09.348 "name": "BaseBdev1", 00:10:09.348 "uuid": "773de7d4-acd3-4173-b0c2-c3c65f079e5b", 00:10:09.348 "is_configured": true, 00:10:09.348 "data_offset": 2048, 00:10:09.348 "data_size": 63488 00:10:09.348 }, 00:10:09.348 { 00:10:09.348 "name": "BaseBdev2", 00:10:09.348 "uuid": "4c3b7fde-53ea-499b-8ac1-ff5964249472", 00:10:09.348 "is_configured": true, 00:10:09.348 "data_offset": 2048, 00:10:09.348 "data_size": 63488 00:10:09.348 }, 
00:10:09.348 { 00:10:09.348 "name": "BaseBdev3", 00:10:09.348 "uuid": "002236bc-93b9-472d-9a13-3c13cc651db4", 00:10:09.348 "is_configured": true, 00:10:09.348 "data_offset": 2048, 00:10:09.348 "data_size": 63488 00:10:09.348 } 00:10:09.348 ] 00:10:09.348 } 00:10:09.348 } 00:10:09.348 }' 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:09.348 BaseBdev2 00:10:09.348 BaseBdev3' 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.348 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.607 
14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.607 [2024-11-27 14:09:46.764992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:09.607 [2024-11-27 14:09:46.765028] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:09.607 [2024-11-27 14:09:46.765102] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:09.607 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:09.865 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:09.865 "name": "Existed_Raid", 00:10:09.865 "uuid": "93350a59-5bc9-4253-8fe2-21d83374f474", 00:10:09.865 "strip_size_kb": 64, 00:10:09.865 "state": "offline", 00:10:09.865 "raid_level": "raid0", 00:10:09.865 "superblock": true, 00:10:09.865 "num_base_bdevs": 3, 00:10:09.865 "num_base_bdevs_discovered": 2, 00:10:09.865 "num_base_bdevs_operational": 2, 00:10:09.865 "base_bdevs_list": [ 00:10:09.865 { 00:10:09.865 "name": null, 00:10:09.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:09.865 "is_configured": false, 00:10:09.865 "data_offset": 0, 00:10:09.865 "data_size": 63488 00:10:09.865 }, 00:10:09.866 { 00:10:09.866 "name": "BaseBdev2", 00:10:09.866 "uuid": "4c3b7fde-53ea-499b-8ac1-ff5964249472", 00:10:09.866 "is_configured": true, 00:10:09.866 "data_offset": 2048, 00:10:09.866 "data_size": 63488 00:10:09.866 }, 00:10:09.866 { 00:10:09.866 "name": "BaseBdev3", 00:10:09.866 "uuid": "002236bc-93b9-472d-9a13-3c13cc651db4", 
00:10:09.866 "is_configured": true, 00:10:09.866 "data_offset": 2048, 00:10:09.866 "data_size": 63488 00:10:09.866 } 00:10:09.866 ] 00:10:09.866 }' 00:10:09.866 14:09:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:09.866 14:09:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.124 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:10.124 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.124 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.124 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.124 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.124 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:10.124 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.381 [2024-11-27 14:09:47.439722] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.381 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.381 [2024-11-27 14:09:47.580762] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:10.381 [2024-11-27 14:09:47.580857] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.646 BaseBdev2 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:10.646 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:10.647 14:09:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.647 [ 00:10:10.647 { 00:10:10.647 "name": "BaseBdev2", 00:10:10.647 "aliases": [ 00:10:10.647 "1fc470fb-68b4-49da-810e-2b85ed4b815f" 00:10:10.647 ], 00:10:10.647 "product_name": "Malloc disk", 00:10:10.647 "block_size": 512, 00:10:10.647 "num_blocks": 65536, 00:10:10.647 "uuid": "1fc470fb-68b4-49da-810e-2b85ed4b815f", 00:10:10.647 "assigned_rate_limits": { 00:10:10.647 "rw_ios_per_sec": 0, 00:10:10.647 "rw_mbytes_per_sec": 0, 00:10:10.647 "r_mbytes_per_sec": 0, 00:10:10.647 "w_mbytes_per_sec": 0 00:10:10.647 }, 00:10:10.647 "claimed": false, 00:10:10.647 "zoned": false, 00:10:10.647 "supported_io_types": { 00:10:10.647 "read": true, 00:10:10.647 "write": true, 00:10:10.647 "unmap": true, 00:10:10.647 "flush": true, 00:10:10.647 "reset": true, 00:10:10.647 "nvme_admin": false, 00:10:10.647 "nvme_io": false, 00:10:10.647 "nvme_io_md": false, 00:10:10.647 "write_zeroes": true, 00:10:10.647 "zcopy": true, 00:10:10.647 "get_zone_info": false, 00:10:10.647 
"zone_management": false, 00:10:10.647 "zone_append": false, 00:10:10.647 "compare": false, 00:10:10.647 "compare_and_write": false, 00:10:10.647 "abort": true, 00:10:10.647 "seek_hole": false, 00:10:10.647 "seek_data": false, 00:10:10.647 "copy": true, 00:10:10.647 "nvme_iov_md": false 00:10:10.647 }, 00:10:10.647 "memory_domains": [ 00:10:10.647 { 00:10:10.647 "dma_device_id": "system", 00:10:10.647 "dma_device_type": 1 00:10:10.647 }, 00:10:10.647 { 00:10:10.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.647 "dma_device_type": 2 00:10:10.647 } 00:10:10.647 ], 00:10:10.647 "driver_specific": {} 00:10:10.647 } 00:10:10.647 ] 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.647 BaseBdev3 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@905 -- # local i 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.647 [ 00:10:10.647 { 00:10:10.647 "name": "BaseBdev3", 00:10:10.647 "aliases": [ 00:10:10.647 "a4194fde-22c4-4866-9aa5-ad9811f41d2a" 00:10:10.647 ], 00:10:10.647 "product_name": "Malloc disk", 00:10:10.647 "block_size": 512, 00:10:10.647 "num_blocks": 65536, 00:10:10.647 "uuid": "a4194fde-22c4-4866-9aa5-ad9811f41d2a", 00:10:10.647 "assigned_rate_limits": { 00:10:10.647 "rw_ios_per_sec": 0, 00:10:10.647 "rw_mbytes_per_sec": 0, 00:10:10.647 "r_mbytes_per_sec": 0, 00:10:10.647 "w_mbytes_per_sec": 0 00:10:10.647 }, 00:10:10.647 "claimed": false, 00:10:10.647 "zoned": false, 00:10:10.647 "supported_io_types": { 00:10:10.647 "read": true, 00:10:10.647 "write": true, 00:10:10.647 "unmap": true, 00:10:10.647 "flush": true, 00:10:10.647 "reset": true, 00:10:10.647 "nvme_admin": false, 00:10:10.647 "nvme_io": false, 00:10:10.647 "nvme_io_md": false, 00:10:10.647 "write_zeroes": true, 00:10:10.647 
"zcopy": true, 00:10:10.647 "get_zone_info": false, 00:10:10.647 "zone_management": false, 00:10:10.647 "zone_append": false, 00:10:10.647 "compare": false, 00:10:10.647 "compare_and_write": false, 00:10:10.647 "abort": true, 00:10:10.647 "seek_hole": false, 00:10:10.647 "seek_data": false, 00:10:10.647 "copy": true, 00:10:10.647 "nvme_iov_md": false 00:10:10.647 }, 00:10:10.647 "memory_domains": [ 00:10:10.647 { 00:10:10.647 "dma_device_id": "system", 00:10:10.647 "dma_device_type": 1 00:10:10.647 }, 00:10:10.647 { 00:10:10.647 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.647 "dma_device_type": 2 00:10:10.647 } 00:10:10.647 ], 00:10:10.647 "driver_specific": {} 00:10:10.647 } 00:10:10.647 ] 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.647 [2024-11-27 14:09:47.876651] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:10.647 [2024-11-27 14:09:47.876709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:10.647 [2024-11-27 14:09:47.876742] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:10.647 [2024-11-27 14:09:47.879182] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:10.647 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:10.906 14:09:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:10.906 "name": "Existed_Raid", 00:10:10.906 "uuid": "45f906a4-9aab-498e-8445-30a172bbe4cd", 00:10:10.906 "strip_size_kb": 64, 00:10:10.906 "state": "configuring", 00:10:10.906 "raid_level": "raid0", 00:10:10.906 "superblock": true, 00:10:10.906 "num_base_bdevs": 3, 00:10:10.906 "num_base_bdevs_discovered": 2, 00:10:10.906 "num_base_bdevs_operational": 3, 00:10:10.906 "base_bdevs_list": [ 00:10:10.906 { 00:10:10.906 "name": "BaseBdev1", 00:10:10.906 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:10.906 "is_configured": false, 00:10:10.906 "data_offset": 0, 00:10:10.906 "data_size": 0 00:10:10.906 }, 00:10:10.906 { 00:10:10.906 "name": "BaseBdev2", 00:10:10.906 "uuid": "1fc470fb-68b4-49da-810e-2b85ed4b815f", 00:10:10.906 "is_configured": true, 00:10:10.906 "data_offset": 2048, 00:10:10.906 "data_size": 63488 00:10:10.906 }, 00:10:10.906 { 00:10:10.906 "name": "BaseBdev3", 00:10:10.906 "uuid": "a4194fde-22c4-4866-9aa5-ad9811f41d2a", 00:10:10.906 "is_configured": true, 00:10:10.906 "data_offset": 2048, 00:10:10.906 "data_size": 63488 00:10:10.906 } 00:10:10.906 ] 00:10:10.906 }' 00:10:10.906 14:09:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:10.906 14:09:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.164 [2024-11-27 14:09:48.416892] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.164 14:09:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.164 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.422 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.422 "name": "Existed_Raid", 00:10:11.422 "uuid": "45f906a4-9aab-498e-8445-30a172bbe4cd", 00:10:11.422 "strip_size_kb": 64, 
00:10:11.422 "state": "configuring", 00:10:11.422 "raid_level": "raid0", 00:10:11.422 "superblock": true, 00:10:11.423 "num_base_bdevs": 3, 00:10:11.423 "num_base_bdevs_discovered": 1, 00:10:11.423 "num_base_bdevs_operational": 3, 00:10:11.423 "base_bdevs_list": [ 00:10:11.423 { 00:10:11.423 "name": "BaseBdev1", 00:10:11.423 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:11.423 "is_configured": false, 00:10:11.423 "data_offset": 0, 00:10:11.423 "data_size": 0 00:10:11.423 }, 00:10:11.423 { 00:10:11.423 "name": null, 00:10:11.423 "uuid": "1fc470fb-68b4-49da-810e-2b85ed4b815f", 00:10:11.423 "is_configured": false, 00:10:11.423 "data_offset": 0, 00:10:11.423 "data_size": 63488 00:10:11.423 }, 00:10:11.423 { 00:10:11.423 "name": "BaseBdev3", 00:10:11.423 "uuid": "a4194fde-22c4-4866-9aa5-ad9811f41d2a", 00:10:11.423 "is_configured": true, 00:10:11.423 "data_offset": 2048, 00:10:11.423 "data_size": 63488 00:10:11.423 } 00:10:11.423 ] 00:10:11.423 }' 00:10:11.423 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.423 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.990 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:11.990 14:09:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.990 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.990 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.990 14:09:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.990 [2024-11-27 14:09:49.050896] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:11.990 BaseBdev1 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.990 
[ 00:10:11.990 { 00:10:11.990 "name": "BaseBdev1", 00:10:11.990 "aliases": [ 00:10:11.990 "4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb" 00:10:11.990 ], 00:10:11.990 "product_name": "Malloc disk", 00:10:11.990 "block_size": 512, 00:10:11.990 "num_blocks": 65536, 00:10:11.990 "uuid": "4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb", 00:10:11.990 "assigned_rate_limits": { 00:10:11.990 "rw_ios_per_sec": 0, 00:10:11.990 "rw_mbytes_per_sec": 0, 00:10:11.990 "r_mbytes_per_sec": 0, 00:10:11.990 "w_mbytes_per_sec": 0 00:10:11.990 }, 00:10:11.990 "claimed": true, 00:10:11.990 "claim_type": "exclusive_write", 00:10:11.990 "zoned": false, 00:10:11.990 "supported_io_types": { 00:10:11.990 "read": true, 00:10:11.990 "write": true, 00:10:11.990 "unmap": true, 00:10:11.990 "flush": true, 00:10:11.990 "reset": true, 00:10:11.990 "nvme_admin": false, 00:10:11.990 "nvme_io": false, 00:10:11.990 "nvme_io_md": false, 00:10:11.990 "write_zeroes": true, 00:10:11.990 "zcopy": true, 00:10:11.990 "get_zone_info": false, 00:10:11.990 "zone_management": false, 00:10:11.990 "zone_append": false, 00:10:11.990 "compare": false, 00:10:11.990 "compare_and_write": false, 00:10:11.990 "abort": true, 00:10:11.990 "seek_hole": false, 00:10:11.990 "seek_data": false, 00:10:11.990 "copy": true, 00:10:11.990 "nvme_iov_md": false 00:10:11.990 }, 00:10:11.990 "memory_domains": [ 00:10:11.990 { 00:10:11.990 "dma_device_id": "system", 00:10:11.990 "dma_device_type": 1 00:10:11.990 }, 00:10:11.990 { 00:10:11.990 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:11.990 "dma_device_type": 2 00:10:11.990 } 00:10:11.990 ], 00:10:11.990 "driver_specific": {} 00:10:11.990 } 00:10:11.990 ] 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid 
configuring raid0 64 3 00:10:11.990 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:11.991 "name": "Existed_Raid", 00:10:11.991 "uuid": "45f906a4-9aab-498e-8445-30a172bbe4cd", 00:10:11.991 "strip_size_kb": 64, 00:10:11.991 "state": "configuring", 00:10:11.991 "raid_level": "raid0", 00:10:11.991 "superblock": true, 
00:10:11.991 "num_base_bdevs": 3, 00:10:11.991 "num_base_bdevs_discovered": 2, 00:10:11.991 "num_base_bdevs_operational": 3, 00:10:11.991 "base_bdevs_list": [ 00:10:11.991 { 00:10:11.991 "name": "BaseBdev1", 00:10:11.991 "uuid": "4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb", 00:10:11.991 "is_configured": true, 00:10:11.991 "data_offset": 2048, 00:10:11.991 "data_size": 63488 00:10:11.991 }, 00:10:11.991 { 00:10:11.991 "name": null, 00:10:11.991 "uuid": "1fc470fb-68b4-49da-810e-2b85ed4b815f", 00:10:11.991 "is_configured": false, 00:10:11.991 "data_offset": 0, 00:10:11.991 "data_size": 63488 00:10:11.991 }, 00:10:11.991 { 00:10:11.991 "name": "BaseBdev3", 00:10:11.991 "uuid": "a4194fde-22c4-4866-9aa5-ad9811f41d2a", 00:10:11.991 "is_configured": true, 00:10:11.991 "data_offset": 2048, 00:10:11.991 "data_size": 63488 00:10:11.991 } 00:10:11.991 ] 00:10:11.991 }' 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:11.991 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:12.558 [2024-11-27 14:09:49.675135] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:12.558 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:12.559 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:12.559 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:12.559 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:12.559 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:12.559 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:10:12.559 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:12.559 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:12.559 "name": "Existed_Raid", 00:10:12.559 "uuid": "45f906a4-9aab-498e-8445-30a172bbe4cd", 00:10:12.559 "strip_size_kb": 64, 00:10:12.559 "state": "configuring", 00:10:12.559 "raid_level": "raid0", 00:10:12.559 "superblock": true, 00:10:12.559 "num_base_bdevs": 3, 00:10:12.559 "num_base_bdevs_discovered": 1, 00:10:12.559 "num_base_bdevs_operational": 3, 00:10:12.559 "base_bdevs_list": [ 00:10:12.559 { 00:10:12.559 "name": "BaseBdev1", 00:10:12.559 "uuid": "4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb", 00:10:12.559 "is_configured": true, 00:10:12.559 "data_offset": 2048, 00:10:12.559 "data_size": 63488 00:10:12.559 }, 00:10:12.559 { 00:10:12.559 "name": null, 00:10:12.559 "uuid": "1fc470fb-68b4-49da-810e-2b85ed4b815f", 00:10:12.559 "is_configured": false, 00:10:12.559 "data_offset": 0, 00:10:12.559 "data_size": 63488 00:10:12.559 }, 00:10:12.559 { 00:10:12.559 "name": null, 00:10:12.559 "uuid": "a4194fde-22c4-4866-9aa5-ad9811f41d2a", 00:10:12.559 "is_configured": false, 00:10:12.559 "data_offset": 0, 00:10:12.559 "data_size": 63488 00:10:12.559 } 00:10:12.559 ] 00:10:12.559 }' 00:10:12.559 14:09:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:12.559 14:09:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.126 [2024-11-27 14:09:50.239403] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 
-- # local tmp 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.126 "name": "Existed_Raid", 00:10:13.126 "uuid": "45f906a4-9aab-498e-8445-30a172bbe4cd", 00:10:13.126 "strip_size_kb": 64, 00:10:13.126 "state": "configuring", 00:10:13.126 "raid_level": "raid0", 00:10:13.126 "superblock": true, 00:10:13.126 "num_base_bdevs": 3, 00:10:13.126 "num_base_bdevs_discovered": 2, 00:10:13.126 "num_base_bdevs_operational": 3, 00:10:13.126 "base_bdevs_list": [ 00:10:13.126 { 00:10:13.126 "name": "BaseBdev1", 00:10:13.126 "uuid": "4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb", 00:10:13.126 "is_configured": true, 00:10:13.126 "data_offset": 2048, 00:10:13.126 "data_size": 63488 00:10:13.126 }, 00:10:13.126 { 00:10:13.126 "name": null, 00:10:13.126 "uuid": "1fc470fb-68b4-49da-810e-2b85ed4b815f", 00:10:13.126 "is_configured": false, 00:10:13.126 "data_offset": 0, 00:10:13.126 "data_size": 63488 00:10:13.126 }, 00:10:13.126 { 00:10:13.126 "name": "BaseBdev3", 00:10:13.126 "uuid": "a4194fde-22c4-4866-9aa5-ad9811f41d2a", 00:10:13.126 "is_configured": true, 00:10:13.126 "data_offset": 2048, 00:10:13.126 "data_size": 63488 00:10:13.126 } 00:10:13.126 ] 00:10:13.126 }' 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.126 14:09:50 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.695 [2024-11-27 14:09:50.835593] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:13.695 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:13.954 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:13.954 "name": "Existed_Raid", 00:10:13.954 "uuid": "45f906a4-9aab-498e-8445-30a172bbe4cd", 00:10:13.954 "strip_size_kb": 64, 00:10:13.954 "state": "configuring", 00:10:13.954 "raid_level": "raid0", 00:10:13.954 "superblock": true, 00:10:13.954 "num_base_bdevs": 3, 00:10:13.954 "num_base_bdevs_discovered": 1, 00:10:13.954 "num_base_bdevs_operational": 3, 00:10:13.954 "base_bdevs_list": [ 00:10:13.954 { 00:10:13.954 "name": null, 00:10:13.954 "uuid": "4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb", 00:10:13.954 "is_configured": false, 00:10:13.954 "data_offset": 0, 00:10:13.954 "data_size": 63488 00:10:13.954 }, 00:10:13.954 { 00:10:13.954 "name": null, 00:10:13.954 "uuid": "1fc470fb-68b4-49da-810e-2b85ed4b815f", 00:10:13.954 "is_configured": false, 00:10:13.954 "data_offset": 0, 00:10:13.954 
"data_size": 63488 00:10:13.954 }, 00:10:13.954 { 00:10:13.954 "name": "BaseBdev3", 00:10:13.954 "uuid": "a4194fde-22c4-4866-9aa5-ad9811f41d2a", 00:10:13.954 "is_configured": true, 00:10:13.954 "data_offset": 2048, 00:10:13.954 "data_size": 63488 00:10:13.954 } 00:10:13.954 ] 00:10:13.954 }' 00:10:13.954 14:09:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:13.954 14:09:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.212 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.212 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:14.212 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.212 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.212 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.470 [2024-11-27 14:09:51.513075] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:10:14.470 14:09:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.470 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:14.471 "name": "Existed_Raid", 00:10:14.471 "uuid": "45f906a4-9aab-498e-8445-30a172bbe4cd", 00:10:14.471 "strip_size_kb": 64, 00:10:14.471 "state": "configuring", 00:10:14.471 "raid_level": "raid0", 00:10:14.471 "superblock": true, 00:10:14.471 "num_base_bdevs": 3, 00:10:14.471 
"num_base_bdevs_discovered": 2, 00:10:14.471 "num_base_bdevs_operational": 3, 00:10:14.471 "base_bdevs_list": [ 00:10:14.471 { 00:10:14.471 "name": null, 00:10:14.471 "uuid": "4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb", 00:10:14.471 "is_configured": false, 00:10:14.471 "data_offset": 0, 00:10:14.471 "data_size": 63488 00:10:14.471 }, 00:10:14.471 { 00:10:14.471 "name": "BaseBdev2", 00:10:14.471 "uuid": "1fc470fb-68b4-49da-810e-2b85ed4b815f", 00:10:14.471 "is_configured": true, 00:10:14.471 "data_offset": 2048, 00:10:14.471 "data_size": 63488 00:10:14.471 }, 00:10:14.471 { 00:10:14.471 "name": "BaseBdev3", 00:10:14.471 "uuid": "a4194fde-22c4-4866-9aa5-ad9811f41d2a", 00:10:14.471 "is_configured": true, 00:10:14.471 "data_offset": 2048, 00:10:14.471 "data_size": 63488 00:10:14.471 } 00:10:14.471 ] 00:10:14.471 }' 00:10:14.471 14:09:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:14.471 14:09:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.039 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:15.039 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.039 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.039 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.039 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.039 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:15.039 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:15.039 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.039 14:09:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.039 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.039 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.040 [2024-11-27 14:09:52.201912] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:15.040 [2024-11-27 14:09:52.202227] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:15.040 [2024-11-27 14:09:52.202266] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:15.040 NewBaseBdev 00:10:15.040 [2024-11-27 14:09:52.202645] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:15.040 [2024-11-27 14:09:52.202848] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:15.040 [2024-11-27 14:09:52.202866] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:15.040 [2024-11-27 14:09:52.203037] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:15.040 
14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.040 [ 00:10:15.040 { 00:10:15.040 "name": "NewBaseBdev", 00:10:15.040 "aliases": [ 00:10:15.040 "4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb" 00:10:15.040 ], 00:10:15.040 "product_name": "Malloc disk", 00:10:15.040 "block_size": 512, 00:10:15.040 "num_blocks": 65536, 00:10:15.040 "uuid": "4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb", 00:10:15.040 "assigned_rate_limits": { 00:10:15.040 "rw_ios_per_sec": 0, 00:10:15.040 "rw_mbytes_per_sec": 0, 00:10:15.040 "r_mbytes_per_sec": 0, 00:10:15.040 "w_mbytes_per_sec": 0 00:10:15.040 }, 00:10:15.040 "claimed": true, 00:10:15.040 "claim_type": "exclusive_write", 00:10:15.040 "zoned": false, 00:10:15.040 "supported_io_types": { 00:10:15.040 "read": true, 00:10:15.040 "write": true, 00:10:15.040 
"unmap": true, 00:10:15.040 "flush": true, 00:10:15.040 "reset": true, 00:10:15.040 "nvme_admin": false, 00:10:15.040 "nvme_io": false, 00:10:15.040 "nvme_io_md": false, 00:10:15.040 "write_zeroes": true, 00:10:15.040 "zcopy": true, 00:10:15.040 "get_zone_info": false, 00:10:15.040 "zone_management": false, 00:10:15.040 "zone_append": false, 00:10:15.040 "compare": false, 00:10:15.040 "compare_and_write": false, 00:10:15.040 "abort": true, 00:10:15.040 "seek_hole": false, 00:10:15.040 "seek_data": false, 00:10:15.040 "copy": true, 00:10:15.040 "nvme_iov_md": false 00:10:15.040 }, 00:10:15.040 "memory_domains": [ 00:10:15.040 { 00:10:15.040 "dma_device_id": "system", 00:10:15.040 "dma_device_type": 1 00:10:15.040 }, 00:10:15.040 { 00:10:15.040 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.040 "dma_device_type": 2 00:10:15.040 } 00:10:15.040 ], 00:10:15.040 "driver_specific": {} 00:10:15.040 } 00:10:15.040 ] 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:15.040 "name": "Existed_Raid", 00:10:15.040 "uuid": "45f906a4-9aab-498e-8445-30a172bbe4cd", 00:10:15.040 "strip_size_kb": 64, 00:10:15.040 "state": "online", 00:10:15.040 "raid_level": "raid0", 00:10:15.040 "superblock": true, 00:10:15.040 "num_base_bdevs": 3, 00:10:15.040 "num_base_bdevs_discovered": 3, 00:10:15.040 "num_base_bdevs_operational": 3, 00:10:15.040 "base_bdevs_list": [ 00:10:15.040 { 00:10:15.040 "name": "NewBaseBdev", 00:10:15.040 "uuid": "4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb", 00:10:15.040 "is_configured": true, 00:10:15.040 "data_offset": 2048, 00:10:15.040 "data_size": 63488 00:10:15.040 }, 00:10:15.040 { 00:10:15.040 "name": "BaseBdev2", 00:10:15.040 "uuid": "1fc470fb-68b4-49da-810e-2b85ed4b815f", 00:10:15.040 "is_configured": true, 00:10:15.040 "data_offset": 2048, 00:10:15.040 "data_size": 63488 00:10:15.040 }, 00:10:15.040 { 00:10:15.040 "name": "BaseBdev3", 00:10:15.040 "uuid": "a4194fde-22c4-4866-9aa5-ad9811f41d2a", 00:10:15.040 
"is_configured": true, 00:10:15.040 "data_offset": 2048, 00:10:15.040 "data_size": 63488 00:10:15.040 } 00:10:15.040 ] 00:10:15.040 }' 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:15.040 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.608 [2024-11-27 14:09:52.754525] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:15.608 "name": "Existed_Raid", 00:10:15.608 "aliases": [ 00:10:15.608 "45f906a4-9aab-498e-8445-30a172bbe4cd" 00:10:15.608 ], 00:10:15.608 "product_name": "Raid 
Volume", 00:10:15.608 "block_size": 512, 00:10:15.608 "num_blocks": 190464, 00:10:15.608 "uuid": "45f906a4-9aab-498e-8445-30a172bbe4cd", 00:10:15.608 "assigned_rate_limits": { 00:10:15.608 "rw_ios_per_sec": 0, 00:10:15.608 "rw_mbytes_per_sec": 0, 00:10:15.608 "r_mbytes_per_sec": 0, 00:10:15.608 "w_mbytes_per_sec": 0 00:10:15.608 }, 00:10:15.608 "claimed": false, 00:10:15.608 "zoned": false, 00:10:15.608 "supported_io_types": { 00:10:15.608 "read": true, 00:10:15.608 "write": true, 00:10:15.608 "unmap": true, 00:10:15.608 "flush": true, 00:10:15.608 "reset": true, 00:10:15.608 "nvme_admin": false, 00:10:15.608 "nvme_io": false, 00:10:15.608 "nvme_io_md": false, 00:10:15.608 "write_zeroes": true, 00:10:15.608 "zcopy": false, 00:10:15.608 "get_zone_info": false, 00:10:15.608 "zone_management": false, 00:10:15.608 "zone_append": false, 00:10:15.608 "compare": false, 00:10:15.608 "compare_and_write": false, 00:10:15.608 "abort": false, 00:10:15.608 "seek_hole": false, 00:10:15.608 "seek_data": false, 00:10:15.608 "copy": false, 00:10:15.608 "nvme_iov_md": false 00:10:15.608 }, 00:10:15.608 "memory_domains": [ 00:10:15.608 { 00:10:15.608 "dma_device_id": "system", 00:10:15.608 "dma_device_type": 1 00:10:15.608 }, 00:10:15.608 { 00:10:15.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.608 "dma_device_type": 2 00:10:15.608 }, 00:10:15.608 { 00:10:15.608 "dma_device_id": "system", 00:10:15.608 "dma_device_type": 1 00:10:15.608 }, 00:10:15.608 { 00:10:15.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.608 "dma_device_type": 2 00:10:15.608 }, 00:10:15.608 { 00:10:15.608 "dma_device_id": "system", 00:10:15.608 "dma_device_type": 1 00:10:15.608 }, 00:10:15.608 { 00:10:15.608 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:15.608 "dma_device_type": 2 00:10:15.608 } 00:10:15.608 ], 00:10:15.608 "driver_specific": { 00:10:15.608 "raid": { 00:10:15.608 "uuid": "45f906a4-9aab-498e-8445-30a172bbe4cd", 00:10:15.608 "strip_size_kb": 64, 00:10:15.608 "state": "online", 
00:10:15.608 "raid_level": "raid0", 00:10:15.608 "superblock": true, 00:10:15.608 "num_base_bdevs": 3, 00:10:15.608 "num_base_bdevs_discovered": 3, 00:10:15.608 "num_base_bdevs_operational": 3, 00:10:15.608 "base_bdevs_list": [ 00:10:15.608 { 00:10:15.608 "name": "NewBaseBdev", 00:10:15.608 "uuid": "4a6855e8-9943-4d1c-8e6e-1bfa5c6a57eb", 00:10:15.608 "is_configured": true, 00:10:15.608 "data_offset": 2048, 00:10:15.608 "data_size": 63488 00:10:15.608 }, 00:10:15.608 { 00:10:15.608 "name": "BaseBdev2", 00:10:15.608 "uuid": "1fc470fb-68b4-49da-810e-2b85ed4b815f", 00:10:15.608 "is_configured": true, 00:10:15.608 "data_offset": 2048, 00:10:15.608 "data_size": 63488 00:10:15.608 }, 00:10:15.608 { 00:10:15.608 "name": "BaseBdev3", 00:10:15.608 "uuid": "a4194fde-22c4-4866-9aa5-ad9811f41d2a", 00:10:15.608 "is_configured": true, 00:10:15.608 "data_offset": 2048, 00:10:15.608 "data_size": 63488 00:10:15.608 } 00:10:15.608 ] 00:10:15.608 } 00:10:15.608 } 00:10:15.608 }' 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:15.608 BaseBdev2 00:10:15.608 BaseBdev3' 00:10:15.608 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.867 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:15.867 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.867 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:15.867 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.867 14:09:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.867 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.867 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.868 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.868 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.868 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.868 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.868 14:09:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:15.868 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.868 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.868 14:09:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:15.868 [2024-11-27 14:09:53.066294] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:15.868 [2024-11-27 14:09:53.066329] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:15.868 [2024-11-27 14:09:53.066428] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:15.868 [2024-11-27 14:09:53.066501] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:15.868 [2024-11-27 14:09:53.066523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 64339 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 64339 ']' 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # 
kill -0 64339 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64339 00:10:15.868 killing process with pid 64339 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64339' 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 64339 00:10:15.868 [2024-11-27 14:09:53.104575] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:15.868 14:09:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 64339 00:10:16.127 [2024-11-27 14:09:53.369216] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.506 14:09:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:17.506 00:10:17.506 real 0m11.902s 00:10:17.506 user 0m19.845s 00:10:17.506 sys 0m1.560s 00:10:17.506 14:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.506 14:09:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:17.507 ************************************ 00:10:17.507 END TEST raid_state_function_test_sb 00:10:17.507 ************************************ 00:10:17.507 14:09:54 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:10:17.507 14:09:54 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 
00:10:17.507 14:09:54 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.507 14:09:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.507 ************************************ 00:10:17.507 START TEST raid_superblock_test 00:10:17.507 ************************************ 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 3 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # 
strip_size=64 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=64975 00:10:17.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 64975 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 64975 ']' 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.507 14:09:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.507 [2024-11-27 14:09:54.569760] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:10:17.507 [2024-11-27 14:09:54.569953] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid64975 ] 00:10:17.507 [2024-11-27 14:09:54.752067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:17.766 [2024-11-27 14:09:54.878275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.025 [2024-11-27 14:09:55.080425] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.025 [2024-11-27 14:09:55.080502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:18.595 
14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.595 malloc1 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.595 [2024-11-27 14:09:55.625186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:18.595 [2024-11-27 14:09:55.625420] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.595 [2024-11-27 14:09:55.625497] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:18.595 [2024-11-27 14:09:55.625760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.595 [2024-11-27 14:09:55.628857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.595 [2024-11-27 14:09:55.629042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:18.595 pt1 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.595 malloc2 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.595 [2024-11-27 14:09:55.678563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:18.595 [2024-11-27 14:09:55.678648] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.595 [2024-11-27 14:09:55.678687] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:18.595 [2024-11-27 14:09:55.678702] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.595 [2024-11-27 14:09:55.681527] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.595 [2024-11-27 14:09:55.681572] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:18.595 
pt2 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:18.595 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.596 malloc3 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.596 [2024-11-27 14:09:55.744969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:18.596 [2024-11-27 14:09:55.745039] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:18.596 [2024-11-27 14:09:55.745075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:18.596 [2024-11-27 14:09:55.745091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:18.596 [2024-11-27 14:09:55.747904] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:18.596 [2024-11-27 14:09:55.747951] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:18.596 pt3 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.596 [2024-11-27 14:09:55.757065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:18.596 [2024-11-27 14:09:55.759500] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:18.596 [2024-11-27 14:09:55.759764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:18.596 [2024-11-27 14:09:55.760002] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:18.596 [2024-11-27 14:09:55.760042] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:18.596 [2024-11-27 14:09:55.760348] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:18.596 [2024-11-27 14:09:55.760550] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:18.596 [2024-11-27 14:09:55.760565] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:18.596 [2024-11-27 14:09:55.760746] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:18.596 14:09:55 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:18.596 "name": "raid_bdev1", 00:10:18.596 "uuid": "e9117fe2-8cc1-4f75-84a3-ee1e93c8d816", 00:10:18.596 "strip_size_kb": 64, 00:10:18.596 "state": "online", 00:10:18.596 "raid_level": "raid0", 00:10:18.596 "superblock": true, 00:10:18.596 "num_base_bdevs": 3, 00:10:18.596 "num_base_bdevs_discovered": 3, 00:10:18.596 "num_base_bdevs_operational": 3, 00:10:18.596 "base_bdevs_list": [ 00:10:18.596 { 00:10:18.596 "name": "pt1", 00:10:18.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:18.596 "is_configured": true, 00:10:18.596 "data_offset": 2048, 00:10:18.596 "data_size": 63488 00:10:18.596 }, 00:10:18.596 { 00:10:18.596 "name": "pt2", 00:10:18.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:18.596 "is_configured": true, 00:10:18.596 "data_offset": 2048, 00:10:18.596 "data_size": 63488 00:10:18.596 }, 00:10:18.596 { 00:10:18.596 "name": "pt3", 00:10:18.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:18.596 "is_configured": true, 00:10:18.596 "data_offset": 2048, 00:10:18.596 "data_size": 63488 00:10:18.596 } 00:10:18.596 ] 00:10:18.596 }' 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:18.596 14:09:55 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.165 [2024-11-27 14:09:56.277536] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.165 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:19.165 "name": "raid_bdev1", 00:10:19.165 "aliases": [ 00:10:19.165 "e9117fe2-8cc1-4f75-84a3-ee1e93c8d816" 00:10:19.165 ], 00:10:19.165 "product_name": "Raid Volume", 00:10:19.165 "block_size": 512, 00:10:19.165 "num_blocks": 190464, 00:10:19.165 "uuid": "e9117fe2-8cc1-4f75-84a3-ee1e93c8d816", 00:10:19.165 "assigned_rate_limits": { 00:10:19.165 "rw_ios_per_sec": 0, 00:10:19.165 "rw_mbytes_per_sec": 0, 00:10:19.165 "r_mbytes_per_sec": 0, 00:10:19.165 "w_mbytes_per_sec": 0 00:10:19.165 }, 00:10:19.165 "claimed": false, 00:10:19.165 "zoned": false, 00:10:19.165 "supported_io_types": { 00:10:19.165 "read": true, 00:10:19.165 "write": true, 00:10:19.165 "unmap": true, 00:10:19.165 "flush": true, 00:10:19.165 "reset": true, 00:10:19.165 "nvme_admin": false, 00:10:19.165 "nvme_io": false, 00:10:19.165 "nvme_io_md": false, 00:10:19.165 "write_zeroes": true, 00:10:19.165 "zcopy": false, 00:10:19.165 "get_zone_info": false, 00:10:19.165 "zone_management": false, 00:10:19.165 "zone_append": false, 00:10:19.165 "compare": 
false, 00:10:19.165 "compare_and_write": false, 00:10:19.165 "abort": false, 00:10:19.165 "seek_hole": false, 00:10:19.165 "seek_data": false, 00:10:19.165 "copy": false, 00:10:19.165 "nvme_iov_md": false 00:10:19.165 }, 00:10:19.165 "memory_domains": [ 00:10:19.165 { 00:10:19.165 "dma_device_id": "system", 00:10:19.165 "dma_device_type": 1 00:10:19.165 }, 00:10:19.165 { 00:10:19.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.165 "dma_device_type": 2 00:10:19.165 }, 00:10:19.165 { 00:10:19.165 "dma_device_id": "system", 00:10:19.165 "dma_device_type": 1 00:10:19.165 }, 00:10:19.165 { 00:10:19.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.165 "dma_device_type": 2 00:10:19.165 }, 00:10:19.165 { 00:10:19.165 "dma_device_id": "system", 00:10:19.165 "dma_device_type": 1 00:10:19.165 }, 00:10:19.165 { 00:10:19.165 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:19.165 "dma_device_type": 2 00:10:19.165 } 00:10:19.165 ], 00:10:19.165 "driver_specific": { 00:10:19.165 "raid": { 00:10:19.165 "uuid": "e9117fe2-8cc1-4f75-84a3-ee1e93c8d816", 00:10:19.165 "strip_size_kb": 64, 00:10:19.165 "state": "online", 00:10:19.165 "raid_level": "raid0", 00:10:19.165 "superblock": true, 00:10:19.165 "num_base_bdevs": 3, 00:10:19.165 "num_base_bdevs_discovered": 3, 00:10:19.165 "num_base_bdevs_operational": 3, 00:10:19.165 "base_bdevs_list": [ 00:10:19.165 { 00:10:19.165 "name": "pt1", 00:10:19.165 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.165 "is_configured": true, 00:10:19.165 "data_offset": 2048, 00:10:19.165 "data_size": 63488 00:10:19.165 }, 00:10:19.165 { 00:10:19.165 "name": "pt2", 00:10:19.165 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.165 "is_configured": true, 00:10:19.165 "data_offset": 2048, 00:10:19.165 "data_size": 63488 00:10:19.165 }, 00:10:19.165 { 00:10:19.165 "name": "pt3", 00:10:19.165 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.165 "is_configured": true, 00:10:19.165 "data_offset": 2048, 00:10:19.165 "data_size": 
63488 00:10:19.165 } 00:10:19.165 ] 00:10:19.166 } 00:10:19.166 } 00:10:19.166 }' 00:10:19.166 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:19.166 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:19.166 pt2 00:10:19.166 pt3' 00:10:19.166 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.166 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:19.166 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.166 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:19.166 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.166 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.166 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.436 
14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:19.436 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.437 [2024-11-27 14:09:56.609612] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e9117fe2-8cc1-4f75-84a3-ee1e93c8d816 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z e9117fe2-8cc1-4f75-84a3-ee1e93c8d816 ']' 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.437 [2024-11-27 14:09:56.661326] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.437 [2024-11-27 14:09:56.661362] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:19.437 [2024-11-27 14:09:56.661456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:19.437 [2024-11-27 14:09:56.661532] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:19.437 [2024-11-27 14:09:56.661548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.437 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:19.696 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.697 [2024-11-27 14:09:56.821434] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:19.697 [2024-11-27 14:09:56.824080] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:19.697 [2024-11-27 14:09:56.824155] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:19.697 [2024-11-27 14:09:56.824233] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:19.697 [2024-11-27 14:09:56.824308] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:19.697 [2024-11-27 14:09:56.824341] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:19.697 [2024-11-27 14:09:56.824368] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:19.697 [2024-11-27 14:09:56.824384] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:19.697 request: 00:10:19.697 { 00:10:19.697 "name": "raid_bdev1", 00:10:19.697 "raid_level": "raid0", 00:10:19.697 "base_bdevs": [ 00:10:19.697 "malloc1", 00:10:19.697 "malloc2", 00:10:19.697 "malloc3" 00:10:19.697 ], 00:10:19.697 "strip_size_kb": 64, 00:10:19.697 "superblock": false, 00:10:19.697 "method": "bdev_raid_create", 00:10:19.697 "req_id": 1 00:10:19.697 } 00:10:19.697 Got JSON-RPC error response 00:10:19.697 response: 00:10:19.697 { 00:10:19.697 "code": -17, 00:10:19.697 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:19.697 } 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.697 [2024-11-27 14:09:56.889415] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:19.697 [2024-11-27 14:09:56.889682] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.697 [2024-11-27 14:09:56.889759] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:19.697 [2024-11-27 14:09:56.889979] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.697 [2024-11-27 14:09:56.892989] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.697 [2024-11-27 14:09:56.893146] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:19.697 [2024-11-27 14:09:56.893391] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:19.697 [2024-11-27 14:09:56.893578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 
00:10:19.697 pt1 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:19.697 "name": "raid_bdev1", 00:10:19.697 "uuid": "e9117fe2-8cc1-4f75-84a3-ee1e93c8d816", 00:10:19.697 
"strip_size_kb": 64, 00:10:19.697 "state": "configuring", 00:10:19.697 "raid_level": "raid0", 00:10:19.697 "superblock": true, 00:10:19.697 "num_base_bdevs": 3, 00:10:19.697 "num_base_bdevs_discovered": 1, 00:10:19.697 "num_base_bdevs_operational": 3, 00:10:19.697 "base_bdevs_list": [ 00:10:19.697 { 00:10:19.697 "name": "pt1", 00:10:19.697 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:19.697 "is_configured": true, 00:10:19.697 "data_offset": 2048, 00:10:19.697 "data_size": 63488 00:10:19.697 }, 00:10:19.697 { 00:10:19.697 "name": null, 00:10:19.697 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:19.697 "is_configured": false, 00:10:19.697 "data_offset": 2048, 00:10:19.697 "data_size": 63488 00:10:19.697 }, 00:10:19.697 { 00:10:19.697 "name": null, 00:10:19.697 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:19.697 "is_configured": false, 00:10:19.697 "data_offset": 2048, 00:10:19.697 "data_size": 63488 00:10:19.697 } 00:10:19.697 ] 00:10:19.697 }' 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:19.697 14:09:56 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.265 [2024-11-27 14:09:57.425693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.265 [2024-11-27 14:09:57.425933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.265 [2024-11-27 14:09:57.426102] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created 
at: 0x0x616000009c80 00:10:20.265 [2024-11-27 14:09:57.426128] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.265 [2024-11-27 14:09:57.426695] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.265 [2024-11-27 14:09:57.426734] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.265 [2024-11-27 14:09:57.426868] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:20.265 [2024-11-27 14:09:57.426909] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.265 pt2 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.265 [2024-11-27 14:09:57.433713] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.265 14:09:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.265 "name": "raid_bdev1", 00:10:20.265 "uuid": "e9117fe2-8cc1-4f75-84a3-ee1e93c8d816", 00:10:20.265 "strip_size_kb": 64, 00:10:20.265 "state": "configuring", 00:10:20.265 "raid_level": "raid0", 00:10:20.265 "superblock": true, 00:10:20.265 "num_base_bdevs": 3, 00:10:20.265 "num_base_bdevs_discovered": 1, 00:10:20.265 "num_base_bdevs_operational": 3, 00:10:20.265 "base_bdevs_list": [ 00:10:20.265 { 00:10:20.265 "name": "pt1", 00:10:20.265 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.265 "is_configured": true, 00:10:20.265 "data_offset": 2048, 00:10:20.265 "data_size": 63488 00:10:20.265 }, 00:10:20.265 { 00:10:20.265 "name": null, 00:10:20.265 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.265 "is_configured": false, 00:10:20.265 "data_offset": 0, 00:10:20.265 "data_size": 63488 00:10:20.265 }, 00:10:20.265 { 00:10:20.265 "name": null, 00:10:20.265 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.265 
"is_configured": false, 00:10:20.265 "data_offset": 2048, 00:10:20.265 "data_size": 63488 00:10:20.265 } 00:10:20.265 ] 00:10:20.265 }' 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.265 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.833 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:10:20.833 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.833 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:20.833 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.833 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.833 [2024-11-27 14:09:57.985872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:20.833 [2024-11-27 14:09:57.985961] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.833 [2024-11-27 14:09:57.985991] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:10:20.833 [2024-11-27 14:09:57.986009] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.833 [2024-11-27 14:09:57.986637] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.833 [2024-11-27 14:09:57.986674] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:20.833 [2024-11-27 14:09:57.986774] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:20.834 [2024-11-27 14:09:57.986828] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:20.834 pt2 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.834 [2024-11-27 14:09:57.993834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:20.834 [2024-11-27 14:09:57.994051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.834 [2024-11-27 14:09:57.994085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:10:20.834 [2024-11-27 14:09:57.994103] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.834 [2024-11-27 14:09:57.994650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.834 [2024-11-27 14:09:57.994695] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:20.834 [2024-11-27 14:09:57.994796] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:20.834 [2024-11-27 14:09:57.994831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:20.834 [2024-11-27 14:09:57.994983] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:20.834 [2024-11-27 14:09:57.995009] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:20.834 [2024-11-27 14:09:57.995354] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:10:20.834 [2024-11-27 14:09:57.995564] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:20.834 [2024-11-27 14:09:57.995578] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:10:20.834 [2024-11-27 14:09:57.995751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:20.834 pt3 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:20.834 14:09:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:20.834 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:20.834 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:10:20.834 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:20.834 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:20.834 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:20.834 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:20.834 "name": "raid_bdev1", 00:10:20.834 "uuid": "e9117fe2-8cc1-4f75-84a3-ee1e93c8d816", 00:10:20.834 "strip_size_kb": 64, 00:10:20.834 "state": "online", 00:10:20.834 "raid_level": "raid0", 00:10:20.834 "superblock": true, 00:10:20.834 "num_base_bdevs": 3, 00:10:20.834 "num_base_bdevs_discovered": 3, 00:10:20.834 "num_base_bdevs_operational": 3, 00:10:20.834 "base_bdevs_list": [ 00:10:20.834 { 00:10:20.834 "name": "pt1", 00:10:20.834 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:20.834 "is_configured": true, 00:10:20.834 "data_offset": 2048, 00:10:20.834 "data_size": 63488 00:10:20.834 }, 00:10:20.834 { 00:10:20.834 "name": "pt2", 00:10:20.834 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:20.834 "is_configured": true, 00:10:20.834 "data_offset": 2048, 00:10:20.834 "data_size": 63488 00:10:20.834 }, 00:10:20.834 { 00:10:20.834 "name": "pt3", 00:10:20.834 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:20.834 "is_configured": true, 00:10:20.834 "data_offset": 2048, 00:10:20.834 "data_size": 63488 00:10:20.834 } 00:10:20.834 ] 00:10:20.834 }' 00:10:20.834 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:20.834 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:21.403 14:09:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.403 [2024-11-27 14:09:58.530429] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:21.403 "name": "raid_bdev1", 00:10:21.403 "aliases": [ 00:10:21.403 "e9117fe2-8cc1-4f75-84a3-ee1e93c8d816" 00:10:21.403 ], 00:10:21.403 "product_name": "Raid Volume", 00:10:21.403 "block_size": 512, 00:10:21.403 "num_blocks": 190464, 00:10:21.403 "uuid": "e9117fe2-8cc1-4f75-84a3-ee1e93c8d816", 00:10:21.403 "assigned_rate_limits": { 00:10:21.403 "rw_ios_per_sec": 0, 00:10:21.403 "rw_mbytes_per_sec": 0, 00:10:21.403 "r_mbytes_per_sec": 0, 00:10:21.403 "w_mbytes_per_sec": 0 00:10:21.403 }, 00:10:21.403 "claimed": false, 00:10:21.403 "zoned": false, 00:10:21.403 "supported_io_types": { 00:10:21.403 "read": true, 00:10:21.403 "write": true, 00:10:21.403 "unmap": true, 00:10:21.403 "flush": true, 00:10:21.403 "reset": true, 00:10:21.403 "nvme_admin": false, 00:10:21.403 "nvme_io": false, 00:10:21.403 "nvme_io_md": false, 00:10:21.403 
"write_zeroes": true, 00:10:21.403 "zcopy": false, 00:10:21.403 "get_zone_info": false, 00:10:21.403 "zone_management": false, 00:10:21.403 "zone_append": false, 00:10:21.403 "compare": false, 00:10:21.403 "compare_and_write": false, 00:10:21.403 "abort": false, 00:10:21.403 "seek_hole": false, 00:10:21.403 "seek_data": false, 00:10:21.403 "copy": false, 00:10:21.403 "nvme_iov_md": false 00:10:21.403 }, 00:10:21.403 "memory_domains": [ 00:10:21.403 { 00:10:21.403 "dma_device_id": "system", 00:10:21.403 "dma_device_type": 1 00:10:21.403 }, 00:10:21.403 { 00:10:21.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.403 "dma_device_type": 2 00:10:21.403 }, 00:10:21.403 { 00:10:21.403 "dma_device_id": "system", 00:10:21.403 "dma_device_type": 1 00:10:21.403 }, 00:10:21.403 { 00:10:21.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.403 "dma_device_type": 2 00:10:21.403 }, 00:10:21.403 { 00:10:21.403 "dma_device_id": "system", 00:10:21.403 "dma_device_type": 1 00:10:21.403 }, 00:10:21.403 { 00:10:21.403 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:21.403 "dma_device_type": 2 00:10:21.403 } 00:10:21.403 ], 00:10:21.403 "driver_specific": { 00:10:21.403 "raid": { 00:10:21.403 "uuid": "e9117fe2-8cc1-4f75-84a3-ee1e93c8d816", 00:10:21.403 "strip_size_kb": 64, 00:10:21.403 "state": "online", 00:10:21.403 "raid_level": "raid0", 00:10:21.403 "superblock": true, 00:10:21.403 "num_base_bdevs": 3, 00:10:21.403 "num_base_bdevs_discovered": 3, 00:10:21.403 "num_base_bdevs_operational": 3, 00:10:21.403 "base_bdevs_list": [ 00:10:21.403 { 00:10:21.403 "name": "pt1", 00:10:21.403 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:21.403 "is_configured": true, 00:10:21.403 "data_offset": 2048, 00:10:21.403 "data_size": 63488 00:10:21.403 }, 00:10:21.403 { 00:10:21.403 "name": "pt2", 00:10:21.403 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:21.403 "is_configured": true, 00:10:21.403 "data_offset": 2048, 00:10:21.403 "data_size": 63488 00:10:21.403 }, 00:10:21.403 
{ 00:10:21.403 "name": "pt3", 00:10:21.403 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:21.403 "is_configured": true, 00:10:21.403 "data_offset": 2048, 00:10:21.403 "data_size": 63488 00:10:21.403 } 00:10:21.403 ] 00:10:21.403 } 00:10:21.403 } 00:10:21.403 }' 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:21.403 pt2 00:10:21.403 pt3' 00:10:21.403 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:21.662 14:09:58 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:10:21.662 
[2024-11-27 14:09:58.858531] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e9117fe2-8cc1-4f75-84a3-ee1e93c8d816 '!=' e9117fe2-8cc1-4f75-84a3-ee1e93c8d816 ']' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 64975 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 64975 ']' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 64975 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 64975 00:10:21.662 killing process with pid 64975 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 64975' 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 64975 00:10:21.662 [2024-11-27 14:09:58.935618] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:21.662 14:09:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 64975 00:10:21.662 [2024-11-27 14:09:58.935735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:21.662 [2024-11-27 14:09:58.935827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:21.662 [2024-11-27 14:09:58.935866] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:10:21.921 [2024-11-27 14:09:59.192951] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:23.297 14:10:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:10:23.297 00:10:23.297 real 0m5.749s 00:10:23.297 user 0m8.705s 00:10:23.297 sys 0m0.873s 00:10:23.297 14:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:23.297 ************************************ 00:10:23.297 END TEST raid_superblock_test 00:10:23.297 ************************************ 00:10:23.297 14:10:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.297 14:10:00 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:23.297 14:10:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:23.297 14:10:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:23.297 14:10:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:23.297 ************************************ 00:10:23.297 START TEST raid_read_error_test 00:10:23.297 ************************************ 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 read 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:23.297 14:10:00 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.YEAmPShBGP 00:10:23.297 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65229 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65229 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 65229 ']' 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.297 14:10:00 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.297 [2024-11-27 14:10:00.395122] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:10:23.297 [2024-11-27 14:10:00.395327] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65229 ] 00:10:23.555 [2024-11-27 14:10:00.581485] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:23.555 [2024-11-27 14:10:00.713181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.814 [2024-11-27 14:10:00.923381] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:23.814 [2024-11-27 14:10:00.923695] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 BaseBdev1_malloc 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 true 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 [2024-11-27 14:10:01.436243] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:24.382 [2024-11-27 14:10:01.436367] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.382 [2024-11-27 14:10:01.436400] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:24.382 [2024-11-27 14:10:01.436432] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.382 [2024-11-27 14:10:01.439463] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.382 [2024-11-27 14:10:01.439725] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:24.382 BaseBdev1 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 BaseBdev2_malloc 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 true 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 [2024-11-27 14:10:01.503404] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:24.382 [2024-11-27 14:10:01.503736] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.382 [2024-11-27 14:10:01.503793] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:24.382 [2024-11-27 14:10:01.503815] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.382 [2024-11-27 14:10:01.506917] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.382 [2024-11-27 14:10:01.506995] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:24.382 BaseBdev2 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 BaseBdev3_malloc 00:10:24.382 14:10:01 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 true 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 [2024-11-27 14:10:01.582418] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:24.382 [2024-11-27 14:10:01.582537] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:24.382 [2024-11-27 14:10:01.582594] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:24.382 [2024-11-27 14:10:01.582618] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:24.382 [2024-11-27 14:10:01.585651] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:24.382 [2024-11-27 14:10:01.585715] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:24.382 BaseBdev3 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.382 [2024-11-27 14:10:01.594738] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:24.382 [2024-11-27 14:10:01.597265] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:24.382 [2024-11-27 14:10:01.597363] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:24.382 [2024-11-27 14:10:01.597628] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:24.382 [2024-11-27 14:10:01.597649] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:24.382 [2024-11-27 14:10:01.598042] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:24.382 [2024-11-27 14:10:01.598281] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:24.382 [2024-11-27 14:10:01.598303] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:24.382 [2024-11-27 14:10:01.598604] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.382 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:24.383 14:10:01 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:24.383 "name": "raid_bdev1", 00:10:24.383 "uuid": "5bb3c7ab-0862-4859-88e2-85a52b26f1cd", 00:10:24.383 "strip_size_kb": 64, 00:10:24.383 "state": "online", 00:10:24.383 "raid_level": "raid0", 00:10:24.383 "superblock": true, 00:10:24.383 "num_base_bdevs": 3, 00:10:24.383 "num_base_bdevs_discovered": 3, 00:10:24.383 "num_base_bdevs_operational": 3, 00:10:24.383 "base_bdevs_list": [ 00:10:24.383 { 00:10:24.383 "name": "BaseBdev1", 00:10:24.383 "uuid": "e53a83ef-3ad9-55e8-879c-a0546d64bdee", 00:10:24.383 "is_configured": true, 00:10:24.383 "data_offset": 2048, 00:10:24.383 "data_size": 63488 00:10:24.383 }, 00:10:24.383 { 00:10:24.383 "name": "BaseBdev2", 00:10:24.383 "uuid": "bf7c8349-85dd-57b0-b5d3-df91175b65fd", 00:10:24.383 "is_configured": true, 00:10:24.383 "data_offset": 2048, 00:10:24.383 "data_size": 63488 
00:10:24.383 }, 00:10:24.383 { 00:10:24.383 "name": "BaseBdev3", 00:10:24.383 "uuid": "6e8fef7e-b9af-5ed0-ad80-dd3bc5b8d9b7", 00:10:24.383 "is_configured": true, 00:10:24.383 "data_offset": 2048, 00:10:24.383 "data_size": 63488 00:10:24.383 } 00:10:24.383 ] 00:10:24.383 }' 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:24.383 14:10:01 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.951 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:24.951 14:10:02 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:25.211 [2024-11-27 14:10:02.256307] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:26.150 "name": "raid_bdev1", 00:10:26.150 "uuid": "5bb3c7ab-0862-4859-88e2-85a52b26f1cd", 00:10:26.150 "strip_size_kb": 64, 00:10:26.150 "state": "online", 00:10:26.150 "raid_level": "raid0", 00:10:26.150 "superblock": true, 00:10:26.150 "num_base_bdevs": 3, 00:10:26.150 "num_base_bdevs_discovered": 3, 00:10:26.150 "num_base_bdevs_operational": 3, 00:10:26.150 "base_bdevs_list": [ 00:10:26.150 { 00:10:26.150 "name": "BaseBdev1", 00:10:26.150 "uuid": "e53a83ef-3ad9-55e8-879c-a0546d64bdee", 00:10:26.150 "is_configured": true, 00:10:26.150 "data_offset": 2048, 00:10:26.150 "data_size": 63488 
00:10:26.150 }, 00:10:26.150 { 00:10:26.150 "name": "BaseBdev2", 00:10:26.150 "uuid": "bf7c8349-85dd-57b0-b5d3-df91175b65fd", 00:10:26.150 "is_configured": true, 00:10:26.150 "data_offset": 2048, 00:10:26.150 "data_size": 63488 00:10:26.150 }, 00:10:26.150 { 00:10:26.150 "name": "BaseBdev3", 00:10:26.150 "uuid": "6e8fef7e-b9af-5ed0-ad80-dd3bc5b8d9b7", 00:10:26.150 "is_configured": true, 00:10:26.150 "data_offset": 2048, 00:10:26.150 "data_size": 63488 00:10:26.150 } 00:10:26.150 ] 00:10:26.150 }' 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:26.150 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.410 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:26.410 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:26.410 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:26.410 [2024-11-27 14:10:03.679657] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:26.410 [2024-11-27 14:10:03.679879] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:26.410 [2024-11-27 14:10:03.683715] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:26.410 [2024-11-27 14:10:03.683970] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:26.410 { 00:10:26.410 "results": [ 00:10:26.410 { 00:10:26.410 "job": "raid_bdev1", 00:10:26.410 "core_mask": "0x1", 00:10:26.410 "workload": "randrw", 00:10:26.410 "percentage": 50, 00:10:26.410 "status": "finished", 00:10:26.410 "queue_depth": 1, 00:10:26.410 "io_size": 131072, 00:10:26.410 "runtime": 1.421214, 00:10:26.410 "iops": 10187.065424348479, 00:10:26.410 "mibps": 1273.3831780435598, 00:10:26.410 "io_failed": 1, 00:10:26.410 "io_timeout": 0, 00:10:26.410 "avg_latency_us": 
136.9710585236298, 00:10:26.410 "min_latency_us": 37.70181818181818, 00:10:26.410 "max_latency_us": 1921.3963636363637 00:10:26.410 } 00:10:26.410 ], 00:10:26.410 "core_count": 1 00:10:26.410 } 00:10:26.410 [2024-11-27 14:10:03.684186] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:26.410 [2024-11-27 14:10:03.684213] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65229 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 65229 ']' 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 65229 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65229 00:10:26.669 killing process with pid 65229 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65229' 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 65229 00:10:26.669 [2024-11-27 14:10:03.724168] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:26.669 14:10:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 65229 00:10:26.669 [2024-11-27 
14:10:03.932954] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:28.046 14:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.YEAmPShBGP 00:10:28.046 14:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:28.046 14:10:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:28.046 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:10:28.046 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:28.046 ************************************ 00:10:28.046 END TEST raid_read_error_test 00:10:28.046 ************************************ 00:10:28.046 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:28.046 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:28.046 14:10:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:10:28.046 00:10:28.046 real 0m4.736s 00:10:28.046 user 0m5.894s 00:10:28.046 sys 0m0.611s 00:10:28.046 14:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:28.046 14:10:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.046 14:10:05 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:28.046 14:10:05 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:28.046 14:10:05 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:28.046 14:10:05 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:28.046 ************************************ 00:10:28.046 START TEST raid_write_error_test 00:10:28.046 ************************************ 00:10:28.046 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 3 write 00:10:28.047 14:10:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:10:28.047 14:10:05 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:10:28.047 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.CAmc9oLQty 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=65380 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 65380 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 65380 ']' 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:28.047 14:10:05 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.047 [2024-11-27 14:10:05.187872] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:10:28.047 [2024-11-27 14:10:05.188058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65380 ] 00:10:28.306 [2024-11-27 14:10:05.373115] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:28.306 [2024-11-27 14:10:05.501277] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.564 [2024-11-27 14:10:05.706236] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:28.564 [2024-11-27 14:10:05.706318] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.134 BaseBdev1_malloc 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.134 true 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.134 [2024-11-27 14:10:06.288326] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:29.134 [2024-11-27 14:10:06.288402] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.134 [2024-11-27 14:10:06.288436] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:10:29.134 [2024-11-27 14:10:06.288455] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.134 [2024-11-27 14:10:06.291662] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.134 [2024-11-27 14:10:06.291948] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:29.134 BaseBdev1 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:29.134 BaseBdev2_malloc 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.134 true 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.134 [2024-11-27 14:10:06.355140] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:29.134 [2024-11-27 14:10:06.355418] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.134 [2024-11-27 14:10:06.355457] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:10:29.134 [2024-11-27 14:10:06.355477] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.134 [2024-11-27 14:10:06.358835] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.134 [2024-11-27 14:10:06.358884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:29.134 BaseBdev2 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:10:29.134 14:10:06 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.134 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.394 BaseBdev3_malloc 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.394 true 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.394 [2024-11-27 14:10:06.438492] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:29.394 [2024-11-27 14:10:06.438619] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:29.394 [2024-11-27 14:10:06.438653] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:10:29.394 [2024-11-27 14:10:06.438688] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:29.394 [2024-11-27 14:10:06.441681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:29.394 [2024-11-27 14:10:06.441735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:10:29.394 BaseBdev3 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.394 [2024-11-27 14:10:06.450753] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:29.394 [2024-11-27 14:10:06.453299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.394 [2024-11-27 14:10:06.453401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:29.394 [2024-11-27 14:10:06.453652] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:29.394 [2024-11-27 14:10:06.453672] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:29.394 [2024-11-27 14:10:06.454073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:10:29.394 [2024-11-27 14:10:06.454362] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:29.394 [2024-11-27 14:10:06.454385] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:10:29.394 [2024-11-27 14:10:06.454689] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=raid_bdev1 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:29.394 "name": "raid_bdev1", 00:10:29.394 "uuid": "8f6f8688-eed3-4168-853b-2ecb3bb3fe75", 00:10:29.394 "strip_size_kb": 64, 00:10:29.394 "state": "online", 00:10:29.394 "raid_level": "raid0", 00:10:29.394 "superblock": true, 00:10:29.394 "num_base_bdevs": 3, 00:10:29.394 "num_base_bdevs_discovered": 3, 00:10:29.394 "num_base_bdevs_operational": 3, 00:10:29.394 "base_bdevs_list": [ 00:10:29.394 { 00:10:29.394 "name": "BaseBdev1", 
00:10:29.394 "uuid": "08185a7e-278a-5ef4-85c4-231c3fb67b73", 00:10:29.394 "is_configured": true, 00:10:29.394 "data_offset": 2048, 00:10:29.394 "data_size": 63488 00:10:29.394 }, 00:10:29.394 { 00:10:29.394 "name": "BaseBdev2", 00:10:29.394 "uuid": "c6c50e0b-9b83-5a47-9d3b-cc2d05220d34", 00:10:29.394 "is_configured": true, 00:10:29.394 "data_offset": 2048, 00:10:29.394 "data_size": 63488 00:10:29.394 }, 00:10:29.394 { 00:10:29.394 "name": "BaseBdev3", 00:10:29.394 "uuid": "012676f6-1511-5155-8a9a-9916dfe9c54b", 00:10:29.394 "is_configured": true, 00:10:29.394 "data_offset": 2048, 00:10:29.394 "data_size": 63488 00:10:29.394 } 00:10:29.394 ] 00:10:29.394 }' 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:29.394 14:10:06 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:29.962 14:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:10:29.962 14:10:07 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:10:29.962 [2024-11-27 14:10:07.216375] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.898 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:30.898 "name": "raid_bdev1", 00:10:30.898 "uuid": "8f6f8688-eed3-4168-853b-2ecb3bb3fe75", 00:10:30.898 "strip_size_kb": 64, 00:10:30.898 "state": "online", 00:10:30.898 
"raid_level": "raid0", 00:10:30.898 "superblock": true, 00:10:30.898 "num_base_bdevs": 3, 00:10:30.898 "num_base_bdevs_discovered": 3, 00:10:30.898 "num_base_bdevs_operational": 3, 00:10:30.898 "base_bdevs_list": [ 00:10:30.898 { 00:10:30.898 "name": "BaseBdev1", 00:10:30.898 "uuid": "08185a7e-278a-5ef4-85c4-231c3fb67b73", 00:10:30.898 "is_configured": true, 00:10:30.898 "data_offset": 2048, 00:10:30.898 "data_size": 63488 00:10:30.898 }, 00:10:30.898 { 00:10:30.898 "name": "BaseBdev2", 00:10:30.898 "uuid": "c6c50e0b-9b83-5a47-9d3b-cc2d05220d34", 00:10:30.898 "is_configured": true, 00:10:30.899 "data_offset": 2048, 00:10:30.899 "data_size": 63488 00:10:30.899 }, 00:10:30.899 { 00:10:30.899 "name": "BaseBdev3", 00:10:30.899 "uuid": "012676f6-1511-5155-8a9a-9916dfe9c54b", 00:10:30.899 "is_configured": true, 00:10:30.899 "data_offset": 2048, 00:10:30.899 "data_size": 63488 00:10:30.899 } 00:10:30.899 ] 00:10:30.899 }' 00:10:30.899 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:30.899 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.467 [2024-11-27 14:10:08.628805] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:31.467 [2024-11-27 14:10:08.628855] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:31.467 [2024-11-27 14:10:08.632438] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:31.467 [2024-11-27 14:10:08.632510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:31.467 [2024-11-27 14:10:08.632563] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:31.467 [2024-11-27 14:10:08.632578] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:10:31.467 { 00:10:31.467 "results": [ 00:10:31.467 { 00:10:31.467 "job": "raid_bdev1", 00:10:31.467 "core_mask": "0x1", 00:10:31.467 "workload": "randrw", 00:10:31.467 "percentage": 50, 00:10:31.467 "status": "finished", 00:10:31.467 "queue_depth": 1, 00:10:31.467 "io_size": 131072, 00:10:31.467 "runtime": 1.40961, 00:10:31.467 "iops": 10270.926000808735, 00:10:31.467 "mibps": 1283.8657501010919, 00:10:31.467 "io_failed": 1, 00:10:31.467 "io_timeout": 0, 00:10:31.467 "avg_latency_us": 135.4640132103548, 00:10:31.467 "min_latency_us": 38.4, 00:10:31.467 "max_latency_us": 1951.1854545454546 00:10:31.467 } 00:10:31.467 ], 00:10:31.467 "core_count": 1 00:10:31.467 } 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 65380 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 65380 ']' 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 65380 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65380 00:10:31.467 killing process with pid 65380 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:31.467 14:10:08 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65380' 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 65380 00:10:31.467 14:10:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 65380 00:10:31.467 [2024-11-27 14:10:08.675302] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:31.727 [2024-11-27 14:10:08.882479] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:33.106 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.CAmc9oLQty 00:10:33.106 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:10:33.106 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:10:33.106 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:10:33.106 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:10:33.106 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:33.106 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:33.106 14:10:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:10:33.106 00:10:33.106 real 0m4.933s 00:10:33.106 user 0m6.247s 00:10:33.106 sys 0m0.608s 00:10:33.106 ************************************ 00:10:33.106 END TEST raid_write_error_test 00:10:33.106 ************************************ 00:10:33.106 14:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.106 14:10:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.106 14:10:10 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:10:33.106 14:10:10 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test concat 3 false 00:10:33.106 14:10:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:33.106 14:10:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.106 14:10:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:33.106 ************************************ 00:10:33.106 START TEST raid_state_function_test 00:10:33.106 ************************************ 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 false 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:33.106 14:10:10 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:10:33.106 Process raid pid: 65524 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=65524 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 65524' 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 65524 00:10:33.106 14:10:10 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 65524 ']' 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.106 14:10:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.106 [2024-11-27 14:10:10.151424] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:10:33.106 [2024-11-27 14:10:10.151896] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.106 [2024-11-27 14:10:10.323063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:33.364 [2024-11-27 14:10:10.456685] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.622 [2024-11-27 14:10:10.664272] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:33.622 [2024-11-27 14:10:10.664334] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd 
bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.189 [2024-11-27 14:10:11.210784] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.189 [2024-11-27 14:10:11.210878] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.189 [2024-11-27 14:10:11.210896] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.189 [2024-11-27 14:10:11.210927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:34.189 [2024-11-27 14:10:11.210938] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.189 [2024-11-27 14:10:11.210952] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.189 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.190 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.190 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.190 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.190 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.190 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.190 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.190 "name": "Existed_Raid", 00:10:34.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.190 "strip_size_kb": 64, 00:10:34.190 "state": "configuring", 00:10:34.190 "raid_level": "concat", 00:10:34.190 "superblock": false, 00:10:34.190 "num_base_bdevs": 3, 00:10:34.190 "num_base_bdevs_discovered": 0, 00:10:34.190 "num_base_bdevs_operational": 3, 00:10:34.190 "base_bdevs_list": [ 00:10:34.190 { 00:10:34.190 "name": "BaseBdev1", 00:10:34.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.190 "is_configured": false, 00:10:34.190 "data_offset": 0, 00:10:34.190 "data_size": 0 00:10:34.190 }, 00:10:34.190 { 00:10:34.190 "name": "BaseBdev2", 00:10:34.190 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.190 "is_configured": false, 00:10:34.190 "data_offset": 0, 00:10:34.190 "data_size": 0 00:10:34.190 }, 00:10:34.190 { 00:10:34.190 "name": "BaseBdev3", 00:10:34.190 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:34.190 "is_configured": false, 00:10:34.190 "data_offset": 0, 00:10:34.190 "data_size": 0 00:10:34.190 } 00:10:34.190 ] 00:10:34.190 }' 00:10:34.190 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.190 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.757 [2024-11-27 14:10:11.730863] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:34.757 [2024-11-27 14:10:11.730916] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.757 [2024-11-27 14:10:11.742892] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:34.757 [2024-11-27 14:10:11.743018] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:34.757 [2024-11-27 14:10:11.743034] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:34.757 [2024-11-27 14:10:11.743049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev 
BaseBdev2 doesn't exist now 00:10:34.757 [2024-11-27 14:10:11.743058] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:34.757 [2024-11-27 14:10:11.743072] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.757 [2024-11-27 14:10:11.789555] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:34.757 BaseBdev1 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.757 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.757 [ 00:10:34.757 { 00:10:34.757 "name": "BaseBdev1", 00:10:34.757 "aliases": [ 00:10:34.757 "7f61a145-8d10-451d-ad0d-8028cbecbe5e" 00:10:34.757 ], 00:10:34.757 "product_name": "Malloc disk", 00:10:34.757 "block_size": 512, 00:10:34.757 "num_blocks": 65536, 00:10:34.757 "uuid": "7f61a145-8d10-451d-ad0d-8028cbecbe5e", 00:10:34.757 "assigned_rate_limits": { 00:10:34.757 "rw_ios_per_sec": 0, 00:10:34.757 "rw_mbytes_per_sec": 0, 00:10:34.757 "r_mbytes_per_sec": 0, 00:10:34.757 "w_mbytes_per_sec": 0 00:10:34.757 }, 00:10:34.757 "claimed": true, 00:10:34.757 "claim_type": "exclusive_write", 00:10:34.757 "zoned": false, 00:10:34.757 "supported_io_types": { 00:10:34.757 "read": true, 00:10:34.757 "write": true, 00:10:34.757 "unmap": true, 00:10:34.758 "flush": true, 00:10:34.758 "reset": true, 00:10:34.758 "nvme_admin": false, 00:10:34.758 "nvme_io": false, 00:10:34.758 "nvme_io_md": false, 00:10:34.758 "write_zeroes": true, 00:10:34.758 "zcopy": true, 00:10:34.758 "get_zone_info": false, 00:10:34.758 "zone_management": false, 00:10:34.758 "zone_append": false, 00:10:34.758 "compare": false, 00:10:34.758 "compare_and_write": false, 00:10:34.758 "abort": true, 00:10:34.758 "seek_hole": false, 00:10:34.758 "seek_data": false, 00:10:34.758 "copy": true, 00:10:34.758 "nvme_iov_md": false 00:10:34.758 }, 00:10:34.758 "memory_domains": [ 00:10:34.758 { 00:10:34.758 "dma_device_id": "system", 00:10:34.758 "dma_device_type": 1 00:10:34.758 }, 00:10:34.758 { 00:10:34.758 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:10:34.758 "dma_device_type": 2 00:10:34.758 } 00:10:34.758 ], 00:10:34.758 "driver_specific": {} 00:10:34.758 } 00:10:34.758 ] 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:34.758 14:10:11 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:34.758 "name": "Existed_Raid", 00:10:34.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.758 "strip_size_kb": 64, 00:10:34.758 "state": "configuring", 00:10:34.758 "raid_level": "concat", 00:10:34.758 "superblock": false, 00:10:34.758 "num_base_bdevs": 3, 00:10:34.758 "num_base_bdevs_discovered": 1, 00:10:34.758 "num_base_bdevs_operational": 3, 00:10:34.758 "base_bdevs_list": [ 00:10:34.758 { 00:10:34.758 "name": "BaseBdev1", 00:10:34.758 "uuid": "7f61a145-8d10-451d-ad0d-8028cbecbe5e", 00:10:34.758 "is_configured": true, 00:10:34.758 "data_offset": 0, 00:10:34.758 "data_size": 65536 00:10:34.758 }, 00:10:34.758 { 00:10:34.758 "name": "BaseBdev2", 00:10:34.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.758 "is_configured": false, 00:10:34.758 "data_offset": 0, 00:10:34.758 "data_size": 0 00:10:34.758 }, 00:10:34.758 { 00:10:34.758 "name": "BaseBdev3", 00:10:34.758 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:34.758 "is_configured": false, 00:10:34.758 "data_offset": 0, 00:10:34.758 "data_size": 0 00:10:34.758 } 00:10:34.758 ] 00:10:34.758 }' 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:34.758 14:10:11 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.325 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:35.325 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.325 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.325 [2024-11-27 14:10:12.345731] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:35.325 [2024-11-27 14:10:12.345840] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:35.325 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.326 [2024-11-27 14:10:12.357863] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:35.326 [2024-11-27 14:10:12.360497] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:35.326 [2024-11-27 14:10:12.360570] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:35.326 [2024-11-27 14:10:12.360602] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:35.326 [2024-11-27 14:10:12.360616] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.326 14:10:12 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.326 "name": "Existed_Raid", 00:10:35.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.326 "strip_size_kb": 64, 00:10:35.326 "state": "configuring", 00:10:35.326 "raid_level": "concat", 00:10:35.326 "superblock": false, 00:10:35.326 "num_base_bdevs": 3, 00:10:35.326 "num_base_bdevs_discovered": 1, 00:10:35.326 "num_base_bdevs_operational": 3, 00:10:35.326 "base_bdevs_list": [ 00:10:35.326 { 00:10:35.326 "name": "BaseBdev1", 00:10:35.326 "uuid": "7f61a145-8d10-451d-ad0d-8028cbecbe5e", 00:10:35.326 "is_configured": true, 00:10:35.326 "data_offset": 
0, 00:10:35.326 "data_size": 65536 00:10:35.326 }, 00:10:35.326 { 00:10:35.326 "name": "BaseBdev2", 00:10:35.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.326 "is_configured": false, 00:10:35.326 "data_offset": 0, 00:10:35.326 "data_size": 0 00:10:35.326 }, 00:10:35.326 { 00:10:35.326 "name": "BaseBdev3", 00:10:35.326 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.326 "is_configured": false, 00:10:35.326 "data_offset": 0, 00:10:35.326 "data_size": 0 00:10:35.326 } 00:10:35.326 ] 00:10:35.326 }' 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.326 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.893 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.894 [2024-11-27 14:10:12.906379] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:35.894 BaseBdev2 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.894 [ 00:10:35.894 { 00:10:35.894 "name": "BaseBdev2", 00:10:35.894 "aliases": [ 00:10:35.894 "bcb1133d-ce40-44ce-a0a1-0243a5315334" 00:10:35.894 ], 00:10:35.894 "product_name": "Malloc disk", 00:10:35.894 "block_size": 512, 00:10:35.894 "num_blocks": 65536, 00:10:35.894 "uuid": "bcb1133d-ce40-44ce-a0a1-0243a5315334", 00:10:35.894 "assigned_rate_limits": { 00:10:35.894 "rw_ios_per_sec": 0, 00:10:35.894 "rw_mbytes_per_sec": 0, 00:10:35.894 "r_mbytes_per_sec": 0, 00:10:35.894 "w_mbytes_per_sec": 0 00:10:35.894 }, 00:10:35.894 "claimed": true, 00:10:35.894 "claim_type": "exclusive_write", 00:10:35.894 "zoned": false, 00:10:35.894 "supported_io_types": { 00:10:35.894 "read": true, 00:10:35.894 "write": true, 00:10:35.894 "unmap": true, 00:10:35.894 "flush": true, 00:10:35.894 "reset": true, 00:10:35.894 "nvme_admin": false, 00:10:35.894 "nvme_io": false, 00:10:35.894 "nvme_io_md": false, 00:10:35.894 "write_zeroes": true, 00:10:35.894 "zcopy": true, 00:10:35.894 "get_zone_info": false, 00:10:35.894 "zone_management": false, 00:10:35.894 "zone_append": false, 00:10:35.894 "compare": false, 00:10:35.894 "compare_and_write": false, 00:10:35.894 "abort": true, 00:10:35.894 "seek_hole": 
false, 00:10:35.894 "seek_data": false, 00:10:35.894 "copy": true, 00:10:35.894 "nvme_iov_md": false 00:10:35.894 }, 00:10:35.894 "memory_domains": [ 00:10:35.894 { 00:10:35.894 "dma_device_id": "system", 00:10:35.894 "dma_device_type": 1 00:10:35.894 }, 00:10:35.894 { 00:10:35.894 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.894 "dma_device_type": 2 00:10:35.894 } 00:10:35.894 ], 00:10:35.894 "driver_specific": {} 00:10:35.894 } 00:10:35.894 ] 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:35.894 "name": "Existed_Raid", 00:10:35.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.894 "strip_size_kb": 64, 00:10:35.894 "state": "configuring", 00:10:35.894 "raid_level": "concat", 00:10:35.894 "superblock": false, 00:10:35.894 "num_base_bdevs": 3, 00:10:35.894 "num_base_bdevs_discovered": 2, 00:10:35.894 "num_base_bdevs_operational": 3, 00:10:35.894 "base_bdevs_list": [ 00:10:35.894 { 00:10:35.894 "name": "BaseBdev1", 00:10:35.894 "uuid": "7f61a145-8d10-451d-ad0d-8028cbecbe5e", 00:10:35.894 "is_configured": true, 00:10:35.894 "data_offset": 0, 00:10:35.894 "data_size": 65536 00:10:35.894 }, 00:10:35.894 { 00:10:35.894 "name": "BaseBdev2", 00:10:35.894 "uuid": "bcb1133d-ce40-44ce-a0a1-0243a5315334", 00:10:35.894 "is_configured": true, 00:10:35.894 "data_offset": 0, 00:10:35.894 "data_size": 65536 00:10:35.894 }, 00:10:35.894 { 00:10:35.894 "name": "BaseBdev3", 00:10:35.894 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:35.894 "is_configured": false, 00:10:35.894 "data_offset": 0, 00:10:35.894 "data_size": 0 00:10:35.894 } 00:10:35.894 ] 00:10:35.894 }' 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:35.894 14:10:12 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:10:36.462 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:36.462 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.462 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.462 [2024-11-27 14:10:13.477164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.462 [2024-11-27 14:10:13.477254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:36.462 [2024-11-27 14:10:13.477287] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:36.462 [2024-11-27 14:10:13.477727] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:36.462 [2024-11-27 14:10:13.478009] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:36.462 [2024-11-27 14:10:13.478038] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:36.462 [2024-11-27 14:10:13.478400] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:36.462 BaseBdev3 00:10:36.462 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.462 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:36.462 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:36.462 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:36.463 14:10:13 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.463 [ 00:10:36.463 { 00:10:36.463 "name": "BaseBdev3", 00:10:36.463 "aliases": [ 00:10:36.463 "97e600a5-4d9d-443c-b3e5-869755862a1f" 00:10:36.463 ], 00:10:36.463 "product_name": "Malloc disk", 00:10:36.463 "block_size": 512, 00:10:36.463 "num_blocks": 65536, 00:10:36.463 "uuid": "97e600a5-4d9d-443c-b3e5-869755862a1f", 00:10:36.463 "assigned_rate_limits": { 00:10:36.463 "rw_ios_per_sec": 0, 00:10:36.463 "rw_mbytes_per_sec": 0, 00:10:36.463 "r_mbytes_per_sec": 0, 00:10:36.463 "w_mbytes_per_sec": 0 00:10:36.463 }, 00:10:36.463 "claimed": true, 00:10:36.463 "claim_type": "exclusive_write", 00:10:36.463 "zoned": false, 00:10:36.463 "supported_io_types": { 00:10:36.463 "read": true, 00:10:36.463 "write": true, 00:10:36.463 "unmap": true, 00:10:36.463 "flush": true, 00:10:36.463 "reset": true, 00:10:36.463 "nvme_admin": false, 00:10:36.463 "nvme_io": false, 00:10:36.463 "nvme_io_md": false, 00:10:36.463 "write_zeroes": true, 00:10:36.463 "zcopy": true, 00:10:36.463 "get_zone_info": false, 00:10:36.463 "zone_management": false, 00:10:36.463 "zone_append": false, 00:10:36.463 "compare": false, 
00:10:36.463 "compare_and_write": false, 00:10:36.463 "abort": true, 00:10:36.463 "seek_hole": false, 00:10:36.463 "seek_data": false, 00:10:36.463 "copy": true, 00:10:36.463 "nvme_iov_md": false 00:10:36.463 }, 00:10:36.463 "memory_domains": [ 00:10:36.463 { 00:10:36.463 "dma_device_id": "system", 00:10:36.463 "dma_device_type": 1 00:10:36.463 }, 00:10:36.463 { 00:10:36.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:36.463 "dma_device_type": 2 00:10:36.463 } 00:10:36.463 ], 00:10:36.463 "driver_specific": {} 00:10:36.463 } 00:10:36.463 ] 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:36.463 "name": "Existed_Raid", 00:10:36.463 "uuid": "efd82e66-7bcb-46ae-97bd-ca5d1f1478df", 00:10:36.463 "strip_size_kb": 64, 00:10:36.463 "state": "online", 00:10:36.463 "raid_level": "concat", 00:10:36.463 "superblock": false, 00:10:36.463 "num_base_bdevs": 3, 00:10:36.463 "num_base_bdevs_discovered": 3, 00:10:36.463 "num_base_bdevs_operational": 3, 00:10:36.463 "base_bdevs_list": [ 00:10:36.463 { 00:10:36.463 "name": "BaseBdev1", 00:10:36.463 "uuid": "7f61a145-8d10-451d-ad0d-8028cbecbe5e", 00:10:36.463 "is_configured": true, 00:10:36.463 "data_offset": 0, 00:10:36.463 "data_size": 65536 00:10:36.463 }, 00:10:36.463 { 00:10:36.463 "name": "BaseBdev2", 00:10:36.463 "uuid": "bcb1133d-ce40-44ce-a0a1-0243a5315334", 00:10:36.463 "is_configured": true, 00:10:36.463 "data_offset": 0, 00:10:36.463 "data_size": 65536 00:10:36.463 }, 00:10:36.463 { 00:10:36.463 "name": "BaseBdev3", 00:10:36.463 "uuid": "97e600a5-4d9d-443c-b3e5-869755862a1f", 00:10:36.463 "is_configured": true, 00:10:36.463 "data_offset": 0, 00:10:36.463 "data_size": 65536 00:10:36.463 } 00:10:36.463 ] 00:10:36.463 }' 00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 
00:10:36.463 14:10:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:37.032 [2024-11-27 14:10:14.017773] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.032 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:37.032 "name": "Existed_Raid", 00:10:37.032 "aliases": [ 00:10:37.032 "efd82e66-7bcb-46ae-97bd-ca5d1f1478df" 00:10:37.032 ], 00:10:37.032 "product_name": "Raid Volume", 00:10:37.032 "block_size": 512, 00:10:37.032 "num_blocks": 196608, 00:10:37.032 "uuid": "efd82e66-7bcb-46ae-97bd-ca5d1f1478df", 00:10:37.032 "assigned_rate_limits": { 00:10:37.032 "rw_ios_per_sec": 0, 00:10:37.032 "rw_mbytes_per_sec": 0, 00:10:37.032 "r_mbytes_per_sec": 
0, 00:10:37.032 "w_mbytes_per_sec": 0 00:10:37.032 }, 00:10:37.032 "claimed": false, 00:10:37.032 "zoned": false, 00:10:37.032 "supported_io_types": { 00:10:37.032 "read": true, 00:10:37.032 "write": true, 00:10:37.032 "unmap": true, 00:10:37.032 "flush": true, 00:10:37.032 "reset": true, 00:10:37.032 "nvme_admin": false, 00:10:37.032 "nvme_io": false, 00:10:37.032 "nvme_io_md": false, 00:10:37.032 "write_zeroes": true, 00:10:37.032 "zcopy": false, 00:10:37.032 "get_zone_info": false, 00:10:37.032 "zone_management": false, 00:10:37.032 "zone_append": false, 00:10:37.032 "compare": false, 00:10:37.032 "compare_and_write": false, 00:10:37.032 "abort": false, 00:10:37.032 "seek_hole": false, 00:10:37.032 "seek_data": false, 00:10:37.032 "copy": false, 00:10:37.032 "nvme_iov_md": false 00:10:37.032 }, 00:10:37.032 "memory_domains": [ 00:10:37.032 { 00:10:37.032 "dma_device_id": "system", 00:10:37.032 "dma_device_type": 1 00:10:37.032 }, 00:10:37.032 { 00:10:37.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.032 "dma_device_type": 2 00:10:37.032 }, 00:10:37.032 { 00:10:37.032 "dma_device_id": "system", 00:10:37.032 "dma_device_type": 1 00:10:37.032 }, 00:10:37.032 { 00:10:37.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.032 "dma_device_type": 2 00:10:37.032 }, 00:10:37.032 { 00:10:37.032 "dma_device_id": "system", 00:10:37.032 "dma_device_type": 1 00:10:37.032 }, 00:10:37.032 { 00:10:37.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:37.032 "dma_device_type": 2 00:10:37.032 } 00:10:37.032 ], 00:10:37.032 "driver_specific": { 00:10:37.032 "raid": { 00:10:37.032 "uuid": "efd82e66-7bcb-46ae-97bd-ca5d1f1478df", 00:10:37.032 "strip_size_kb": 64, 00:10:37.032 "state": "online", 00:10:37.032 "raid_level": "concat", 00:10:37.033 "superblock": false, 00:10:37.033 "num_base_bdevs": 3, 00:10:37.033 "num_base_bdevs_discovered": 3, 00:10:37.033 "num_base_bdevs_operational": 3, 00:10:37.033 "base_bdevs_list": [ 00:10:37.033 { 00:10:37.033 "name": "BaseBdev1", 
00:10:37.033 "uuid": "7f61a145-8d10-451d-ad0d-8028cbecbe5e", 00:10:37.033 "is_configured": true, 00:10:37.033 "data_offset": 0, 00:10:37.033 "data_size": 65536 00:10:37.033 }, 00:10:37.033 { 00:10:37.033 "name": "BaseBdev2", 00:10:37.033 "uuid": "bcb1133d-ce40-44ce-a0a1-0243a5315334", 00:10:37.033 "is_configured": true, 00:10:37.033 "data_offset": 0, 00:10:37.033 "data_size": 65536 00:10:37.033 }, 00:10:37.033 { 00:10:37.033 "name": "BaseBdev3", 00:10:37.033 "uuid": "97e600a5-4d9d-443c-b3e5-869755862a1f", 00:10:37.033 "is_configured": true, 00:10:37.033 "data_offset": 0, 00:10:37.033 "data_size": 65536 00:10:37.033 } 00:10:37.033 ] 00:10:37.033 } 00:10:37.033 } 00:10:37.033 }' 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:37.033 BaseBdev2 00:10:37.033 BaseBdev3' 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.033 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.293 [2024-11-27 14:10:14.397625] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:37.293 [2024-11-27 14:10:14.397667] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:37.293 [2024-11-27 14:10:14.397752] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:37.293 "name": "Existed_Raid", 00:10:37.293 "uuid": "efd82e66-7bcb-46ae-97bd-ca5d1f1478df", 00:10:37.293 "strip_size_kb": 64, 00:10:37.293 "state": "offline", 00:10:37.293 "raid_level": "concat", 00:10:37.293 "superblock": false, 00:10:37.293 "num_base_bdevs": 3, 00:10:37.293 "num_base_bdevs_discovered": 2, 00:10:37.293 "num_base_bdevs_operational": 2, 00:10:37.293 "base_bdevs_list": [ 00:10:37.293 { 00:10:37.293 "name": null, 00:10:37.293 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.293 "is_configured": false, 00:10:37.293 "data_offset": 0, 00:10:37.293 "data_size": 65536 00:10:37.293 }, 00:10:37.293 { 00:10:37.293 "name": "BaseBdev2", 00:10:37.293 "uuid": 
"bcb1133d-ce40-44ce-a0a1-0243a5315334", 00:10:37.293 "is_configured": true, 00:10:37.293 "data_offset": 0, 00:10:37.293 "data_size": 65536 00:10:37.293 }, 00:10:37.293 { 00:10:37.293 "name": "BaseBdev3", 00:10:37.293 "uuid": "97e600a5-4d9d-443c-b3e5-869755862a1f", 00:10:37.293 "is_configured": true, 00:10:37.293 "data_offset": 0, 00:10:37.293 "data_size": 65536 00:10:37.293 } 00:10:37.293 ] 00:10:37.293 }' 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:37.293 14:10:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:37.861 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.861 [2024-11-27 14:10:15.056658] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.120 [2024-11-27 14:10:15.205485] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:38.120 [2024-11-27 14:10:15.205555] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:38.120 14:10:15 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.120 BaseBdev2 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.120 
14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.120 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.121 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.121 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.121 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.379 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.379 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.380 [ 00:10:38.380 { 00:10:38.380 "name": "BaseBdev2", 00:10:38.380 "aliases": [ 00:10:38.380 "9d6f5a4d-bb52-4db8-be83-b460989b9b87" 00:10:38.380 ], 00:10:38.380 "product_name": "Malloc disk", 00:10:38.380 "block_size": 512, 00:10:38.380 "num_blocks": 65536, 00:10:38.380 "uuid": "9d6f5a4d-bb52-4db8-be83-b460989b9b87", 00:10:38.380 "assigned_rate_limits": { 00:10:38.380 "rw_ios_per_sec": 0, 00:10:38.380 "rw_mbytes_per_sec": 0, 00:10:38.380 "r_mbytes_per_sec": 0, 00:10:38.380 "w_mbytes_per_sec": 0 00:10:38.380 }, 00:10:38.380 "claimed": false, 00:10:38.380 "zoned": false, 00:10:38.380 "supported_io_types": { 00:10:38.380 "read": true, 00:10:38.380 "write": true, 00:10:38.380 "unmap": true, 00:10:38.380 "flush": true, 00:10:38.380 "reset": true, 00:10:38.380 "nvme_admin": false, 00:10:38.380 "nvme_io": false, 00:10:38.380 "nvme_io_md": false, 00:10:38.380 "write_zeroes": true, 
00:10:38.380 "zcopy": true, 00:10:38.380 "get_zone_info": false, 00:10:38.380 "zone_management": false, 00:10:38.380 "zone_append": false, 00:10:38.380 "compare": false, 00:10:38.380 "compare_and_write": false, 00:10:38.380 "abort": true, 00:10:38.380 "seek_hole": false, 00:10:38.380 "seek_data": false, 00:10:38.380 "copy": true, 00:10:38.380 "nvme_iov_md": false 00:10:38.380 }, 00:10:38.380 "memory_domains": [ 00:10:38.380 { 00:10:38.380 "dma_device_id": "system", 00:10:38.380 "dma_device_type": 1 00:10:38.380 }, 00:10:38.380 { 00:10:38.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.380 "dma_device_type": 2 00:10:38.380 } 00:10:38.380 ], 00:10:38.380 "driver_specific": {} 00:10:38.380 } 00:10:38.380 ] 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.380 BaseBdev3 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:38.380 14:10:15 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.380 [ 00:10:38.380 { 00:10:38.380 "name": "BaseBdev3", 00:10:38.380 "aliases": [ 00:10:38.380 "cd388e70-399a-422f-8390-b742cf2f9432" 00:10:38.380 ], 00:10:38.380 "product_name": "Malloc disk", 00:10:38.380 "block_size": 512, 00:10:38.380 "num_blocks": 65536, 00:10:38.380 "uuid": "cd388e70-399a-422f-8390-b742cf2f9432", 00:10:38.380 "assigned_rate_limits": { 00:10:38.380 "rw_ios_per_sec": 0, 00:10:38.380 "rw_mbytes_per_sec": 0, 00:10:38.380 "r_mbytes_per_sec": 0, 00:10:38.380 "w_mbytes_per_sec": 0 00:10:38.380 }, 00:10:38.380 "claimed": false, 00:10:38.380 "zoned": false, 00:10:38.380 "supported_io_types": { 00:10:38.380 "read": true, 00:10:38.380 "write": true, 00:10:38.380 "unmap": true, 00:10:38.380 "flush": true, 00:10:38.380 "reset": true, 00:10:38.380 "nvme_admin": false, 00:10:38.380 "nvme_io": false, 00:10:38.380 "nvme_io_md": false, 00:10:38.380 "write_zeroes": true, 
00:10:38.380 "zcopy": true, 00:10:38.380 "get_zone_info": false, 00:10:38.380 "zone_management": false, 00:10:38.380 "zone_append": false, 00:10:38.380 "compare": false, 00:10:38.380 "compare_and_write": false, 00:10:38.380 "abort": true, 00:10:38.380 "seek_hole": false, 00:10:38.380 "seek_data": false, 00:10:38.380 "copy": true, 00:10:38.380 "nvme_iov_md": false 00:10:38.380 }, 00:10:38.380 "memory_domains": [ 00:10:38.380 { 00:10:38.380 "dma_device_id": "system", 00:10:38.380 "dma_device_type": 1 00:10:38.380 }, 00:10:38.380 { 00:10:38.380 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.380 "dma_device_type": 2 00:10:38.380 } 00:10:38.380 ], 00:10:38.380 "driver_specific": {} 00:10:38.380 } 00:10:38.380 ] 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.380 [2024-11-27 14:10:15.520587] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:38.380 [2024-11-27 14:10:15.520664] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:38.380 [2024-11-27 14:10:15.520704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:38.380 [2024-11-27 14:10:15.523429] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.380 "name": "Existed_Raid", 00:10:38.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.380 "strip_size_kb": 64, 00:10:38.380 "state": "configuring", 00:10:38.380 "raid_level": "concat", 00:10:38.380 "superblock": false, 00:10:38.380 "num_base_bdevs": 3, 00:10:38.380 "num_base_bdevs_discovered": 2, 00:10:38.380 "num_base_bdevs_operational": 3, 00:10:38.380 "base_bdevs_list": [ 00:10:38.380 { 00:10:38.380 "name": "BaseBdev1", 00:10:38.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.380 "is_configured": false, 00:10:38.380 "data_offset": 0, 00:10:38.380 "data_size": 0 00:10:38.380 }, 00:10:38.380 { 00:10:38.380 "name": "BaseBdev2", 00:10:38.380 "uuid": "9d6f5a4d-bb52-4db8-be83-b460989b9b87", 00:10:38.380 "is_configured": true, 00:10:38.380 "data_offset": 0, 00:10:38.380 "data_size": 65536 00:10:38.380 }, 00:10:38.380 { 00:10:38.380 "name": "BaseBdev3", 00:10:38.380 "uuid": "cd388e70-399a-422f-8390-b742cf2f9432", 00:10:38.380 "is_configured": true, 00:10:38.380 "data_offset": 0, 00:10:38.380 "data_size": 65536 00:10:38.380 } 00:10:38.380 ] 00:10:38.380 }' 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.380 14:10:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.947 [2024-11-27 14:10:16.068756] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.947 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:38.947 "name": "Existed_Raid", 00:10:38.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.947 "strip_size_kb": 64, 00:10:38.947 "state": "configuring", 00:10:38.947 "raid_level": "concat", 00:10:38.947 "superblock": false, 
00:10:38.947 "num_base_bdevs": 3, 00:10:38.947 "num_base_bdevs_discovered": 1, 00:10:38.947 "num_base_bdevs_operational": 3, 00:10:38.947 "base_bdevs_list": [ 00:10:38.947 { 00:10:38.947 "name": "BaseBdev1", 00:10:38.947 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.947 "is_configured": false, 00:10:38.947 "data_offset": 0, 00:10:38.947 "data_size": 0 00:10:38.947 }, 00:10:38.947 { 00:10:38.947 "name": null, 00:10:38.947 "uuid": "9d6f5a4d-bb52-4db8-be83-b460989b9b87", 00:10:38.947 "is_configured": false, 00:10:38.947 "data_offset": 0, 00:10:38.947 "data_size": 65536 00:10:38.947 }, 00:10:38.947 { 00:10:38.947 "name": "BaseBdev3", 00:10:38.947 "uuid": "cd388e70-399a-422f-8390-b742cf2f9432", 00:10:38.948 "is_configured": true, 00:10:38.948 "data_offset": 0, 00:10:38.948 "data_size": 65536 00:10:38.948 } 00:10:38.948 ] 00:10:38.948 }' 00:10:38.948 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:38.948 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.515 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:39.515 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.515 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.515 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.515 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.516 
14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.516 [2024-11-27 14:10:16.701688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:39.516 BaseBdev1 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.516 [ 00:10:39.516 { 00:10:39.516 "name": "BaseBdev1", 00:10:39.516 "aliases": [ 00:10:39.516 "59b1fed9-71af-443c-9d3c-37ccef120a50" 00:10:39.516 ], 00:10:39.516 "product_name": 
"Malloc disk", 00:10:39.516 "block_size": 512, 00:10:39.516 "num_blocks": 65536, 00:10:39.516 "uuid": "59b1fed9-71af-443c-9d3c-37ccef120a50", 00:10:39.516 "assigned_rate_limits": { 00:10:39.516 "rw_ios_per_sec": 0, 00:10:39.516 "rw_mbytes_per_sec": 0, 00:10:39.516 "r_mbytes_per_sec": 0, 00:10:39.516 "w_mbytes_per_sec": 0 00:10:39.516 }, 00:10:39.516 "claimed": true, 00:10:39.516 "claim_type": "exclusive_write", 00:10:39.516 "zoned": false, 00:10:39.516 "supported_io_types": { 00:10:39.516 "read": true, 00:10:39.516 "write": true, 00:10:39.516 "unmap": true, 00:10:39.516 "flush": true, 00:10:39.516 "reset": true, 00:10:39.516 "nvme_admin": false, 00:10:39.516 "nvme_io": false, 00:10:39.516 "nvme_io_md": false, 00:10:39.516 "write_zeroes": true, 00:10:39.516 "zcopy": true, 00:10:39.516 "get_zone_info": false, 00:10:39.516 "zone_management": false, 00:10:39.516 "zone_append": false, 00:10:39.516 "compare": false, 00:10:39.516 "compare_and_write": false, 00:10:39.516 "abort": true, 00:10:39.516 "seek_hole": false, 00:10:39.516 "seek_data": false, 00:10:39.516 "copy": true, 00:10:39.516 "nvme_iov_md": false 00:10:39.516 }, 00:10:39.516 "memory_domains": [ 00:10:39.516 { 00:10:39.516 "dma_device_id": "system", 00:10:39.516 "dma_device_type": 1 00:10:39.516 }, 00:10:39.516 { 00:10:39.516 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:39.516 "dma_device_type": 2 00:10:39.516 } 00:10:39.516 ], 00:10:39.516 "driver_specific": {} 00:10:39.516 } 00:10:39.516 ] 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:39.516 14:10:16 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.516 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:39.803 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:39.803 "name": "Existed_Raid", 00:10:39.803 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.803 "strip_size_kb": 64, 00:10:39.803 "state": "configuring", 00:10:39.803 "raid_level": "concat", 00:10:39.803 "superblock": false, 00:10:39.803 "num_base_bdevs": 3, 00:10:39.803 "num_base_bdevs_discovered": 2, 00:10:39.803 "num_base_bdevs_operational": 3, 00:10:39.803 "base_bdevs_list": [ 00:10:39.803 { 00:10:39.803 "name": "BaseBdev1", 
00:10:39.803 "uuid": "59b1fed9-71af-443c-9d3c-37ccef120a50", 00:10:39.803 "is_configured": true, 00:10:39.803 "data_offset": 0, 00:10:39.803 "data_size": 65536 00:10:39.803 }, 00:10:39.803 { 00:10:39.803 "name": null, 00:10:39.803 "uuid": "9d6f5a4d-bb52-4db8-be83-b460989b9b87", 00:10:39.803 "is_configured": false, 00:10:39.803 "data_offset": 0, 00:10:39.803 "data_size": 65536 00:10:39.803 }, 00:10:39.803 { 00:10:39.803 "name": "BaseBdev3", 00:10:39.803 "uuid": "cd388e70-399a-422f-8390-b742cf2f9432", 00:10:39.803 "is_configured": true, 00:10:39.803 "data_offset": 0, 00:10:39.803 "data_size": 65536 00:10:39.803 } 00:10:39.803 ] 00:10:39.803 }' 00:10:39.803 14:10:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:39.803 14:10:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.063 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.063 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.063 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:40.063 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.063 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.063 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:40.063 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:40.063 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.063 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.064 [2024-11-27 14:10:17.306023] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:40.064 
14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.064 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.328 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.328 "name": "Existed_Raid", 00:10:40.328 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:40.328 "strip_size_kb": 64, 00:10:40.328 "state": "configuring", 00:10:40.328 "raid_level": "concat", 00:10:40.328 "superblock": false, 00:10:40.328 "num_base_bdevs": 3, 00:10:40.328 "num_base_bdevs_discovered": 1, 00:10:40.328 "num_base_bdevs_operational": 3, 00:10:40.328 "base_bdevs_list": [ 00:10:40.328 { 00:10:40.328 "name": "BaseBdev1", 00:10:40.328 "uuid": "59b1fed9-71af-443c-9d3c-37ccef120a50", 00:10:40.328 "is_configured": true, 00:10:40.328 "data_offset": 0, 00:10:40.328 "data_size": 65536 00:10:40.328 }, 00:10:40.328 { 00:10:40.328 "name": null, 00:10:40.328 "uuid": "9d6f5a4d-bb52-4db8-be83-b460989b9b87", 00:10:40.328 "is_configured": false, 00:10:40.328 "data_offset": 0, 00:10:40.328 "data_size": 65536 00:10:40.328 }, 00:10:40.328 { 00:10:40.328 "name": null, 00:10:40.328 "uuid": "cd388e70-399a-422f-8390-b742cf2f9432", 00:10:40.328 "is_configured": false, 00:10:40.328 "data_offset": 0, 00:10:40.328 "data_size": 65536 00:10:40.328 } 00:10:40.328 ] 00:10:40.328 }' 00:10:40.328 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.328 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.589 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.589 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:40.589 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.589 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.589 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.848 [2024-11-27 14:10:17.902282] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.848 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:40.849 "name": "Existed_Raid", 00:10:40.849 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.849 "strip_size_kb": 64, 00:10:40.849 "state": "configuring", 00:10:40.849 "raid_level": "concat", 00:10:40.849 "superblock": false, 00:10:40.849 "num_base_bdevs": 3, 00:10:40.849 "num_base_bdevs_discovered": 2, 00:10:40.849 "num_base_bdevs_operational": 3, 00:10:40.849 "base_bdevs_list": [ 00:10:40.849 { 00:10:40.849 "name": "BaseBdev1", 00:10:40.849 "uuid": "59b1fed9-71af-443c-9d3c-37ccef120a50", 00:10:40.849 "is_configured": true, 00:10:40.849 "data_offset": 0, 00:10:40.849 "data_size": 65536 00:10:40.849 }, 00:10:40.849 { 00:10:40.849 "name": null, 00:10:40.849 "uuid": "9d6f5a4d-bb52-4db8-be83-b460989b9b87", 00:10:40.849 "is_configured": false, 00:10:40.849 "data_offset": 0, 00:10:40.849 "data_size": 65536 00:10:40.849 }, 00:10:40.849 { 00:10:40.849 "name": "BaseBdev3", 00:10:40.849 "uuid": "cd388e70-399a-422f-8390-b742cf2f9432", 00:10:40.849 "is_configured": true, 00:10:40.849 "data_offset": 0, 00:10:40.849 "data_size": 65536 00:10:40.849 } 00:10:40.849 ] 00:10:40.849 }' 00:10:40.849 14:10:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:40.849 14:10:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.416 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.416 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:41.416 14:10:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.417 [2024-11-27 14:10:18.562510] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:41.417 
14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.417 14:10:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:41.676 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:41.676 "name": "Existed_Raid", 00:10:41.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.676 "strip_size_kb": 64, 00:10:41.676 "state": "configuring", 00:10:41.676 "raid_level": "concat", 00:10:41.676 "superblock": false, 00:10:41.676 "num_base_bdevs": 3, 00:10:41.676 "num_base_bdevs_discovered": 1, 00:10:41.676 "num_base_bdevs_operational": 3, 00:10:41.676 "base_bdevs_list": [ 00:10:41.676 { 00:10:41.676 "name": null, 00:10:41.676 "uuid": "59b1fed9-71af-443c-9d3c-37ccef120a50", 00:10:41.676 "is_configured": false, 00:10:41.676 "data_offset": 0, 00:10:41.676 "data_size": 65536 00:10:41.676 }, 00:10:41.676 { 00:10:41.676 "name": null, 00:10:41.676 "uuid": "9d6f5a4d-bb52-4db8-be83-b460989b9b87", 00:10:41.676 "is_configured": false, 00:10:41.676 "data_offset": 0, 00:10:41.676 "data_size": 65536 00:10:41.676 }, 00:10:41.676 { 00:10:41.676 "name": "BaseBdev3", 00:10:41.676 "uuid": "cd388e70-399a-422f-8390-b742cf2f9432", 00:10:41.676 "is_configured": true, 00:10:41.676 "data_offset": 0, 00:10:41.676 "data_size": 65536 00:10:41.676 } 00:10:41.676 ] 00:10:41.676 }' 00:10:41.676 14:10:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:41.676 14:10:18 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.243 [2024-11-27 14:10:19.289480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.243 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.244 14:10:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.244 "name": "Existed_Raid", 00:10:42.244 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.244 "strip_size_kb": 64, 00:10:42.244 "state": "configuring", 00:10:42.244 "raid_level": "concat", 00:10:42.244 "superblock": false, 00:10:42.244 "num_base_bdevs": 3, 00:10:42.244 "num_base_bdevs_discovered": 2, 00:10:42.244 "num_base_bdevs_operational": 3, 00:10:42.244 "base_bdevs_list": [ 00:10:42.244 { 00:10:42.244 "name": null, 00:10:42.244 "uuid": "59b1fed9-71af-443c-9d3c-37ccef120a50", 00:10:42.244 "is_configured": false, 00:10:42.244 "data_offset": 0, 00:10:42.244 "data_size": 65536 00:10:42.244 }, 00:10:42.244 { 00:10:42.244 "name": "BaseBdev2", 00:10:42.244 "uuid": "9d6f5a4d-bb52-4db8-be83-b460989b9b87", 00:10:42.244 "is_configured": true, 00:10:42.244 "data_offset": 
0, 00:10:42.244 "data_size": 65536 00:10:42.244 }, 00:10:42.244 { 00:10:42.244 "name": "BaseBdev3", 00:10:42.244 "uuid": "cd388e70-399a-422f-8390-b742cf2f9432", 00:10:42.244 "is_configured": true, 00:10:42.244 "data_offset": 0, 00:10:42.244 "data_size": 65536 00:10:42.244 } 00:10:42.244 ] 00:10:42.244 }' 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.244 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 59b1fed9-71af-443c-9d3c-37ccef120a50 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.810 [2024-11-27 14:10:19.943148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:42.810 [2024-11-27 14:10:19.943513] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:42.810 [2024-11-27 14:10:19.943545] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:42.810 [2024-11-27 14:10:19.943921] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:42.810 [2024-11-27 14:10:19.944136] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:42.810 [2024-11-27 14:10:19.944153] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:10:42.810 [2024-11-27 14:10:19.944515] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:42.810 NewBaseBdev 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:42.810 
14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.810 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.810 [ 00:10:42.810 { 00:10:42.810 "name": "NewBaseBdev", 00:10:42.810 "aliases": [ 00:10:42.810 "59b1fed9-71af-443c-9d3c-37ccef120a50" 00:10:42.810 ], 00:10:42.810 "product_name": "Malloc disk", 00:10:42.810 "block_size": 512, 00:10:42.810 "num_blocks": 65536, 00:10:42.810 "uuid": "59b1fed9-71af-443c-9d3c-37ccef120a50", 00:10:42.810 "assigned_rate_limits": { 00:10:42.810 "rw_ios_per_sec": 0, 00:10:42.810 "rw_mbytes_per_sec": 0, 00:10:42.810 "r_mbytes_per_sec": 0, 00:10:42.810 "w_mbytes_per_sec": 0 00:10:42.810 }, 00:10:42.810 "claimed": true, 00:10:42.810 "claim_type": "exclusive_write", 00:10:42.810 "zoned": false, 00:10:42.810 "supported_io_types": { 00:10:42.810 "read": true, 00:10:42.810 "write": true, 00:10:42.810 "unmap": true, 00:10:42.810 "flush": true, 00:10:42.810 "reset": true, 00:10:42.810 "nvme_admin": false, 00:10:42.810 "nvme_io": false, 00:10:42.810 "nvme_io_md": false, 00:10:42.810 "write_zeroes": true, 00:10:42.810 "zcopy": true, 00:10:42.810 "get_zone_info": false, 00:10:42.810 "zone_management": false, 00:10:42.810 "zone_append": false, 00:10:42.810 "compare": false, 00:10:42.810 "compare_and_write": false, 00:10:42.810 "abort": true, 00:10:42.810 "seek_hole": false, 00:10:42.810 "seek_data": false, 00:10:42.810 "copy": true, 00:10:42.810 "nvme_iov_md": false 00:10:42.810 }, 00:10:42.811 
"memory_domains": [ 00:10:42.811 { 00:10:42.811 "dma_device_id": "system", 00:10:42.811 "dma_device_type": 1 00:10:42.811 }, 00:10:42.811 { 00:10:42.811 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:42.811 "dma_device_type": 2 00:10:42.811 } 00:10:42.811 ], 00:10:42.811 "driver_specific": {} 00:10:42.811 } 00:10:42.811 ] 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:42.811 14:10:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.811 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:42.811 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:42.811 "name": "Existed_Raid", 00:10:42.811 "uuid": "c095e554-e700-4f9a-8504-b01ffa25fefd", 00:10:42.811 "strip_size_kb": 64, 00:10:42.811 "state": "online", 00:10:42.811 "raid_level": "concat", 00:10:42.811 "superblock": false, 00:10:42.811 "num_base_bdevs": 3, 00:10:42.811 "num_base_bdevs_discovered": 3, 00:10:42.811 "num_base_bdevs_operational": 3, 00:10:42.811 "base_bdevs_list": [ 00:10:42.811 { 00:10:42.811 "name": "NewBaseBdev", 00:10:42.811 "uuid": "59b1fed9-71af-443c-9d3c-37ccef120a50", 00:10:42.811 "is_configured": true, 00:10:42.811 "data_offset": 0, 00:10:42.811 "data_size": 65536 00:10:42.811 }, 00:10:42.811 { 00:10:42.811 "name": "BaseBdev2", 00:10:42.811 "uuid": "9d6f5a4d-bb52-4db8-be83-b460989b9b87", 00:10:42.811 "is_configured": true, 00:10:42.811 "data_offset": 0, 00:10:42.811 "data_size": 65536 00:10:42.811 }, 00:10:42.811 { 00:10:42.811 "name": "BaseBdev3", 00:10:42.811 "uuid": "cd388e70-399a-422f-8390-b742cf2f9432", 00:10:42.811 "is_configured": true, 00:10:42.811 "data_offset": 0, 00:10:42.811 "data_size": 65536 00:10:42.811 } 00:10:42.811 ] 00:10:42.811 }' 00:10:42.811 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:42.811 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 
-- # local raid_bdev_info 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.457 [2024-11-27 14:10:20.535864] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:43.457 "name": "Existed_Raid", 00:10:43.457 "aliases": [ 00:10:43.457 "c095e554-e700-4f9a-8504-b01ffa25fefd" 00:10:43.457 ], 00:10:43.457 "product_name": "Raid Volume", 00:10:43.457 "block_size": 512, 00:10:43.457 "num_blocks": 196608, 00:10:43.457 "uuid": "c095e554-e700-4f9a-8504-b01ffa25fefd", 00:10:43.457 "assigned_rate_limits": { 00:10:43.457 "rw_ios_per_sec": 0, 00:10:43.457 "rw_mbytes_per_sec": 0, 00:10:43.457 "r_mbytes_per_sec": 0, 00:10:43.457 "w_mbytes_per_sec": 0 00:10:43.457 }, 00:10:43.457 "claimed": false, 00:10:43.457 "zoned": false, 00:10:43.457 "supported_io_types": { 00:10:43.457 "read": true, 00:10:43.457 "write": true, 00:10:43.457 "unmap": true, 00:10:43.457 "flush": true, 00:10:43.457 "reset": true, 00:10:43.457 "nvme_admin": false, 00:10:43.457 "nvme_io": false, 00:10:43.457 "nvme_io_md": false, 00:10:43.457 "write_zeroes": true, 
00:10:43.457 "zcopy": false, 00:10:43.457 "get_zone_info": false, 00:10:43.457 "zone_management": false, 00:10:43.457 "zone_append": false, 00:10:43.457 "compare": false, 00:10:43.457 "compare_and_write": false, 00:10:43.457 "abort": false, 00:10:43.457 "seek_hole": false, 00:10:43.457 "seek_data": false, 00:10:43.457 "copy": false, 00:10:43.457 "nvme_iov_md": false 00:10:43.457 }, 00:10:43.457 "memory_domains": [ 00:10:43.457 { 00:10:43.457 "dma_device_id": "system", 00:10:43.457 "dma_device_type": 1 00:10:43.457 }, 00:10:43.457 { 00:10:43.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.457 "dma_device_type": 2 00:10:43.457 }, 00:10:43.457 { 00:10:43.457 "dma_device_id": "system", 00:10:43.457 "dma_device_type": 1 00:10:43.457 }, 00:10:43.457 { 00:10:43.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.457 "dma_device_type": 2 00:10:43.457 }, 00:10:43.457 { 00:10:43.457 "dma_device_id": "system", 00:10:43.457 "dma_device_type": 1 00:10:43.457 }, 00:10:43.457 { 00:10:43.457 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.457 "dma_device_type": 2 00:10:43.457 } 00:10:43.457 ], 00:10:43.457 "driver_specific": { 00:10:43.457 "raid": { 00:10:43.457 "uuid": "c095e554-e700-4f9a-8504-b01ffa25fefd", 00:10:43.457 "strip_size_kb": 64, 00:10:43.457 "state": "online", 00:10:43.457 "raid_level": "concat", 00:10:43.457 "superblock": false, 00:10:43.457 "num_base_bdevs": 3, 00:10:43.457 "num_base_bdevs_discovered": 3, 00:10:43.457 "num_base_bdevs_operational": 3, 00:10:43.457 "base_bdevs_list": [ 00:10:43.457 { 00:10:43.457 "name": "NewBaseBdev", 00:10:43.457 "uuid": "59b1fed9-71af-443c-9d3c-37ccef120a50", 00:10:43.457 "is_configured": true, 00:10:43.457 "data_offset": 0, 00:10:43.457 "data_size": 65536 00:10:43.457 }, 00:10:43.457 { 00:10:43.457 "name": "BaseBdev2", 00:10:43.457 "uuid": "9d6f5a4d-bb52-4db8-be83-b460989b9b87", 00:10:43.457 "is_configured": true, 00:10:43.457 "data_offset": 0, 00:10:43.457 "data_size": 65536 00:10:43.457 }, 00:10:43.457 { 
00:10:43.457 "name": "BaseBdev3", 00:10:43.457 "uuid": "cd388e70-399a-422f-8390-b742cf2f9432", 00:10:43.457 "is_configured": true, 00:10:43.457 "data_offset": 0, 00:10:43.457 "data_size": 65536 00:10:43.457 } 00:10:43.457 ] 00:10:43.457 } 00:10:43.457 } 00:10:43.457 }' 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:43.457 BaseBdev2 00:10:43.457 BaseBdev3' 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.457 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:43.458 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.458 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:43.458 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.458 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.458 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.458 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:10:43.716 [2024-11-27 14:10:20.855550] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:43.716 [2024-11-27 14:10:20.855723] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:43.716 [2024-11-27 14:10:20.855866] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:43.716 [2024-11-27 14:10:20.855943] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:43.716 [2024-11-27 14:10:20.855964] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 65524 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 65524 ']' 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 65524 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65524 00:10:43.716 killing process with pid 65524 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65524' 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@973 -- # kill 65524 00:10:43.716 [2024-11-27 14:10:20.894249] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:43.716 14:10:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 65524 00:10:43.975 [2024-11-27 14:10:21.178669] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:45.351 ************************************ 00:10:45.351 END TEST raid_state_function_test 00:10:45.351 ************************************ 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:10:45.351 00:10:45.351 real 0m12.214s 00:10:45.351 user 0m20.303s 00:10:45.351 sys 0m1.656s 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:45.351 14:10:22 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:45.351 14:10:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:10:45.351 14:10:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:45.351 14:10:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:45.351 ************************************ 00:10:45.351 START TEST raid_state_function_test_sb 00:10:45.351 ************************************ 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 3 true 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # local raid_bdev 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:10:45.351 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:10:45.352 Process raid pid: 66162 00:10:45.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=66162 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 66162' 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 66162 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 66162 ']' 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:45.352 14:10:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:45.352 [2024-11-27 14:10:22.451470] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:10:45.352 [2024-11-27 14:10:22.451856] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:45.611 [2024-11-27 14:10:22.648335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:45.611 [2024-11-27 14:10:22.807836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:45.869 [2024-11-27 14:10:23.035393] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:45.869 [2024-11-27 14:10:23.035653] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.436 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:46.436 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:10:46.436 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:46.436 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.436 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.436 [2024-11-27 14:10:23.413682] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.436 [2024-11-27 14:10:23.413953] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.436 [2024-11-27 
14:10:23.414081] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.436 [2024-11-27 14:10:23.414228] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.436 [2024-11-27 14:10:23.414337] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.436 [2024-11-27 14:10:23.414495] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.436 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.437 "name": "Existed_Raid", 00:10:46.437 "uuid": "4674167c-6e3a-43aa-b117-c3b81c87e486", 00:10:46.437 "strip_size_kb": 64, 00:10:46.437 "state": "configuring", 00:10:46.437 "raid_level": "concat", 00:10:46.437 "superblock": true, 00:10:46.437 "num_base_bdevs": 3, 00:10:46.437 "num_base_bdevs_discovered": 0, 00:10:46.437 "num_base_bdevs_operational": 3, 00:10:46.437 "base_bdevs_list": [ 00:10:46.437 { 00:10:46.437 "name": "BaseBdev1", 00:10:46.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.437 "is_configured": false, 00:10:46.437 "data_offset": 0, 00:10:46.437 "data_size": 0 00:10:46.437 }, 00:10:46.437 { 00:10:46.437 "name": "BaseBdev2", 00:10:46.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.437 "is_configured": false, 00:10:46.437 "data_offset": 0, 00:10:46.437 "data_size": 0 00:10:46.437 }, 00:10:46.437 { 00:10:46.437 "name": "BaseBdev3", 00:10:46.437 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.437 "is_configured": false, 00:10:46.437 "data_offset": 0, 00:10:46.437 "data_size": 0 00:10:46.437 } 00:10:46.437 ] 00:10:46.437 }' 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.437 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.695 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:46.695 14:10:23 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.695 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.695 [2024-11-27 14:10:23.941769] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.696 [2024-11-27 14:10:23.941845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:10:46.696 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.696 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:46.696 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.696 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.696 [2024-11-27 14:10:23.949767] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:46.696 [2024-11-27 14:10:23.949866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:46.696 [2024-11-27 14:10:23.949882] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:46.696 [2024-11-27 14:10:23.949898] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:46.696 [2024-11-27 14:10:23.949908] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:46.696 [2024-11-27 14:10:23.949922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:46.696 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.696 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:46.696 
14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.696 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.954 [2024-11-27 14:10:23.999803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:46.954 BaseBdev1 00:10:46.954 14:10:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.954 14:10:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.954 [ 00:10:46.954 { 
00:10:46.954 "name": "BaseBdev1", 00:10:46.954 "aliases": [ 00:10:46.954 "c3f9847c-e4a3-4819-93d9-5107671a07fa" 00:10:46.954 ], 00:10:46.954 "product_name": "Malloc disk", 00:10:46.954 "block_size": 512, 00:10:46.954 "num_blocks": 65536, 00:10:46.954 "uuid": "c3f9847c-e4a3-4819-93d9-5107671a07fa", 00:10:46.954 "assigned_rate_limits": { 00:10:46.954 "rw_ios_per_sec": 0, 00:10:46.954 "rw_mbytes_per_sec": 0, 00:10:46.954 "r_mbytes_per_sec": 0, 00:10:46.954 "w_mbytes_per_sec": 0 00:10:46.954 }, 00:10:46.954 "claimed": true, 00:10:46.954 "claim_type": "exclusive_write", 00:10:46.954 "zoned": false, 00:10:46.954 "supported_io_types": { 00:10:46.954 "read": true, 00:10:46.954 "write": true, 00:10:46.954 "unmap": true, 00:10:46.954 "flush": true, 00:10:46.954 "reset": true, 00:10:46.954 "nvme_admin": false, 00:10:46.954 "nvme_io": false, 00:10:46.954 "nvme_io_md": false, 00:10:46.954 "write_zeroes": true, 00:10:46.954 "zcopy": true, 00:10:46.954 "get_zone_info": false, 00:10:46.954 "zone_management": false, 00:10:46.954 "zone_append": false, 00:10:46.954 "compare": false, 00:10:46.954 "compare_and_write": false, 00:10:46.954 "abort": true, 00:10:46.954 "seek_hole": false, 00:10:46.954 "seek_data": false, 00:10:46.954 "copy": true, 00:10:46.954 "nvme_iov_md": false 00:10:46.954 }, 00:10:46.954 "memory_domains": [ 00:10:46.954 { 00:10:46.954 "dma_device_id": "system", 00:10:46.954 "dma_device_type": 1 00:10:46.954 }, 00:10:46.954 { 00:10:46.954 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:46.954 "dma_device_type": 2 00:10:46.954 } 00:10:46.954 ], 00:10:46.954 "driver_specific": {} 00:10:46.954 } 00:10:46.954 ] 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:46.954 "name": "Existed_Raid", 00:10:46.954 "uuid": "0b0fe4bc-18ff-4b7a-a4ed-4a7f8bdf8aaf", 00:10:46.954 "strip_size_kb": 64, 00:10:46.954 "state": "configuring", 00:10:46.954 "raid_level": "concat", 00:10:46.954 "superblock": true, 00:10:46.954 
"num_base_bdevs": 3, 00:10:46.954 "num_base_bdevs_discovered": 1, 00:10:46.954 "num_base_bdevs_operational": 3, 00:10:46.954 "base_bdevs_list": [ 00:10:46.954 { 00:10:46.954 "name": "BaseBdev1", 00:10:46.954 "uuid": "c3f9847c-e4a3-4819-93d9-5107671a07fa", 00:10:46.954 "is_configured": true, 00:10:46.954 "data_offset": 2048, 00:10:46.954 "data_size": 63488 00:10:46.954 }, 00:10:46.954 { 00:10:46.954 "name": "BaseBdev2", 00:10:46.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.954 "is_configured": false, 00:10:46.954 "data_offset": 0, 00:10:46.954 "data_size": 0 00:10:46.954 }, 00:10:46.954 { 00:10:46.954 "name": "BaseBdev3", 00:10:46.954 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:46.954 "is_configured": false, 00:10:46.954 "data_offset": 0, 00:10:46.954 "data_size": 0 00:10:46.954 } 00:10:46.954 ] 00:10:46.954 }' 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:46.954 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.521 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:47.521 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.521 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.521 [2024-11-27 14:10:24.600053] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:47.521 [2024-11-27 14:10:24.600117] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:10:47.521 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.521 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:47.521 
14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.521 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.521 [2024-11-27 14:10:24.612148] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:47.521 [2024-11-27 14:10:24.614572] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.521 [2024-11-27 14:10:24.614648] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.521 [2024-11-27 14:10:24.614665] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.522 [2024-11-27 14:10:24.614680] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:47.522 "name": "Existed_Raid", 00:10:47.522 "uuid": "ec658b46-4144-45a1-9c51-7e386d8455a2", 00:10:47.522 "strip_size_kb": 64, 00:10:47.522 "state": "configuring", 00:10:47.522 "raid_level": "concat", 00:10:47.522 "superblock": true, 00:10:47.522 "num_base_bdevs": 3, 00:10:47.522 "num_base_bdevs_discovered": 1, 00:10:47.522 "num_base_bdevs_operational": 3, 00:10:47.522 "base_bdevs_list": [ 00:10:47.522 { 00:10:47.522 "name": "BaseBdev1", 00:10:47.522 "uuid": "c3f9847c-e4a3-4819-93d9-5107671a07fa", 00:10:47.522 "is_configured": true, 00:10:47.522 "data_offset": 2048, 00:10:47.522 "data_size": 63488 00:10:47.522 }, 00:10:47.522 { 00:10:47.522 "name": "BaseBdev2", 00:10:47.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.522 "is_configured": false, 00:10:47.522 "data_offset": 0, 00:10:47.522 "data_size": 0 00:10:47.522 }, 00:10:47.522 { 00:10:47.522 "name": "BaseBdev3", 00:10:47.522 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:47.522 "is_configured": false, 00:10:47.522 "data_offset": 0, 00:10:47.522 "data_size": 0 00:10:47.522 } 00:10:47.522 ] 00:10:47.522 }' 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:47.522 14:10:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.088 [2024-11-27 14:10:25.195756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:48.088 BaseBdev2 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.088 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.088 [ 00:10:48.088 { 00:10:48.088 "name": "BaseBdev2", 00:10:48.088 "aliases": [ 00:10:48.088 "5ac36b38-de51-4a6c-8325-ef1b50d6e977" 00:10:48.088 ], 00:10:48.088 "product_name": "Malloc disk", 00:10:48.088 "block_size": 512, 00:10:48.088 "num_blocks": 65536, 00:10:48.088 "uuid": "5ac36b38-de51-4a6c-8325-ef1b50d6e977", 00:10:48.088 "assigned_rate_limits": { 00:10:48.088 "rw_ios_per_sec": 0, 00:10:48.088 "rw_mbytes_per_sec": 0, 00:10:48.088 "r_mbytes_per_sec": 0, 00:10:48.088 "w_mbytes_per_sec": 0 00:10:48.088 }, 00:10:48.088 "claimed": true, 00:10:48.088 "claim_type": "exclusive_write", 00:10:48.088 "zoned": false, 00:10:48.088 "supported_io_types": { 00:10:48.088 "read": true, 00:10:48.088 "write": true, 00:10:48.088 "unmap": true, 00:10:48.088 "flush": true, 00:10:48.088 "reset": true, 00:10:48.088 "nvme_admin": false, 00:10:48.088 "nvme_io": false, 00:10:48.088 "nvme_io_md": false, 00:10:48.088 "write_zeroes": true, 00:10:48.088 "zcopy": true, 00:10:48.088 "get_zone_info": false, 00:10:48.088 "zone_management": false, 00:10:48.088 "zone_append": false, 00:10:48.088 "compare": false, 00:10:48.088 "compare_and_write": false, 00:10:48.089 "abort": true, 00:10:48.089 "seek_hole": false, 00:10:48.089 "seek_data": false, 00:10:48.089 "copy": true, 00:10:48.089 "nvme_iov_md": false 00:10:48.089 }, 00:10:48.089 "memory_domains": [ 00:10:48.089 { 00:10:48.089 "dma_device_id": "system", 00:10:48.089 "dma_device_type": 1 00:10:48.089 }, 00:10:48.089 { 00:10:48.089 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.089 "dma_device_type": 2 00:10:48.089 } 00:10:48.089 ], 00:10:48.089 "driver_specific": {} 00:10:48.089 } 00:10:48.089 ] 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.089 "name": "Existed_Raid", 00:10:48.089 "uuid": "ec658b46-4144-45a1-9c51-7e386d8455a2", 00:10:48.089 "strip_size_kb": 64, 00:10:48.089 "state": "configuring", 00:10:48.089 "raid_level": "concat", 00:10:48.089 "superblock": true, 00:10:48.089 "num_base_bdevs": 3, 00:10:48.089 "num_base_bdevs_discovered": 2, 00:10:48.089 "num_base_bdevs_operational": 3, 00:10:48.089 "base_bdevs_list": [ 00:10:48.089 { 00:10:48.089 "name": "BaseBdev1", 00:10:48.089 "uuid": "c3f9847c-e4a3-4819-93d9-5107671a07fa", 00:10:48.089 "is_configured": true, 00:10:48.089 "data_offset": 2048, 00:10:48.089 "data_size": 63488 00:10:48.089 }, 00:10:48.089 { 00:10:48.089 "name": "BaseBdev2", 00:10:48.089 "uuid": "5ac36b38-de51-4a6c-8325-ef1b50d6e977", 00:10:48.089 "is_configured": true, 00:10:48.089 "data_offset": 2048, 00:10:48.089 "data_size": 63488 00:10:48.089 }, 00:10:48.089 { 00:10:48.089 "name": "BaseBdev3", 00:10:48.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:48.089 "is_configured": false, 00:10:48.089 "data_offset": 0, 00:10:48.089 "data_size": 0 00:10:48.089 } 00:10:48.089 ] 00:10:48.089 }' 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.089 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:48.655 14:10:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.655 [2024-11-27 14:10:25.814968] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:48.655 [2024-11-27 14:10:25.815551] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:10:48.655 [2024-11-27 14:10:25.815589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:48.655 BaseBdev3 00:10:48.655 [2024-11-27 14:10:25.815951] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:10:48.655 [2024-11-27 14:10:25.816164] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:10:48.655 [2024-11-27 14:10:25.816188] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:10:48.655 [2024-11-27 14:10:25.816372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.655 [ 00:10:48.655 { 00:10:48.655 "name": "BaseBdev3", 00:10:48.655 "aliases": [ 00:10:48.655 "9883e2d8-dcaf-450c-99d9-3d959a66c12d" 00:10:48.655 ], 00:10:48.655 "product_name": "Malloc disk", 00:10:48.655 "block_size": 512, 00:10:48.655 "num_blocks": 65536, 00:10:48.655 "uuid": "9883e2d8-dcaf-450c-99d9-3d959a66c12d", 00:10:48.655 "assigned_rate_limits": { 00:10:48.655 "rw_ios_per_sec": 0, 00:10:48.655 "rw_mbytes_per_sec": 0, 00:10:48.655 "r_mbytes_per_sec": 0, 00:10:48.655 "w_mbytes_per_sec": 0 00:10:48.655 }, 00:10:48.655 "claimed": true, 00:10:48.655 "claim_type": "exclusive_write", 00:10:48.655 "zoned": false, 00:10:48.655 "supported_io_types": { 00:10:48.655 "read": true, 00:10:48.655 "write": true, 00:10:48.655 "unmap": true, 00:10:48.655 "flush": true, 00:10:48.655 "reset": true, 00:10:48.655 "nvme_admin": false, 00:10:48.655 "nvme_io": false, 00:10:48.655 "nvme_io_md": false, 00:10:48.655 "write_zeroes": true, 00:10:48.655 "zcopy": true, 00:10:48.655 "get_zone_info": false, 00:10:48.655 "zone_management": false, 00:10:48.655 "zone_append": false, 00:10:48.655 "compare": false, 00:10:48.655 "compare_and_write": false, 00:10:48.655 "abort": true, 00:10:48.655 "seek_hole": false, 00:10:48.655 "seek_data": false, 
00:10:48.655 "copy": true, 00:10:48.655 "nvme_iov_md": false 00:10:48.655 }, 00:10:48.655 "memory_domains": [ 00:10:48.655 { 00:10:48.655 "dma_device_id": "system", 00:10:48.655 "dma_device_type": 1 00:10:48.655 }, 00:10:48.655 { 00:10:48.655 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:48.655 "dma_device_type": 2 00:10:48.655 } 00:10:48.655 ], 00:10:48.655 "driver_specific": {} 00:10:48.655 } 00:10:48.655 ] 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:48.655 "name": "Existed_Raid", 00:10:48.655 "uuid": "ec658b46-4144-45a1-9c51-7e386d8455a2", 00:10:48.655 "strip_size_kb": 64, 00:10:48.655 "state": "online", 00:10:48.655 "raid_level": "concat", 00:10:48.655 "superblock": true, 00:10:48.655 "num_base_bdevs": 3, 00:10:48.655 "num_base_bdevs_discovered": 3, 00:10:48.655 "num_base_bdevs_operational": 3, 00:10:48.655 "base_bdevs_list": [ 00:10:48.655 { 00:10:48.655 "name": "BaseBdev1", 00:10:48.655 "uuid": "c3f9847c-e4a3-4819-93d9-5107671a07fa", 00:10:48.655 "is_configured": true, 00:10:48.655 "data_offset": 2048, 00:10:48.655 "data_size": 63488 00:10:48.655 }, 00:10:48.655 { 00:10:48.655 "name": "BaseBdev2", 00:10:48.655 "uuid": "5ac36b38-de51-4a6c-8325-ef1b50d6e977", 00:10:48.655 "is_configured": true, 00:10:48.655 "data_offset": 2048, 00:10:48.655 "data_size": 63488 00:10:48.655 }, 00:10:48.655 { 00:10:48.655 "name": "BaseBdev3", 00:10:48.655 "uuid": "9883e2d8-dcaf-450c-99d9-3d959a66c12d", 00:10:48.655 "is_configured": true, 00:10:48.655 "data_offset": 2048, 00:10:48.655 "data_size": 63488 00:10:48.655 } 00:10:48.655 ] 00:10:48.655 }' 00:10:48.655 14:10:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:48.655 14:10:25 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.222 [2024-11-27 14:10:26.419586] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.222 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:49.222 "name": "Existed_Raid", 00:10:49.222 "aliases": [ 00:10:49.222 "ec658b46-4144-45a1-9c51-7e386d8455a2" 00:10:49.222 ], 00:10:49.222 "product_name": "Raid Volume", 00:10:49.222 "block_size": 512, 00:10:49.222 "num_blocks": 190464, 00:10:49.222 "uuid": "ec658b46-4144-45a1-9c51-7e386d8455a2", 00:10:49.222 "assigned_rate_limits": { 00:10:49.222 "rw_ios_per_sec": 0, 00:10:49.222 "rw_mbytes_per_sec": 0, 00:10:49.222 
"r_mbytes_per_sec": 0, 00:10:49.222 "w_mbytes_per_sec": 0 00:10:49.222 }, 00:10:49.222 "claimed": false, 00:10:49.222 "zoned": false, 00:10:49.222 "supported_io_types": { 00:10:49.222 "read": true, 00:10:49.222 "write": true, 00:10:49.222 "unmap": true, 00:10:49.222 "flush": true, 00:10:49.222 "reset": true, 00:10:49.222 "nvme_admin": false, 00:10:49.222 "nvme_io": false, 00:10:49.222 "nvme_io_md": false, 00:10:49.222 "write_zeroes": true, 00:10:49.222 "zcopy": false, 00:10:49.222 "get_zone_info": false, 00:10:49.222 "zone_management": false, 00:10:49.222 "zone_append": false, 00:10:49.222 "compare": false, 00:10:49.222 "compare_and_write": false, 00:10:49.222 "abort": false, 00:10:49.222 "seek_hole": false, 00:10:49.222 "seek_data": false, 00:10:49.222 "copy": false, 00:10:49.222 "nvme_iov_md": false 00:10:49.222 }, 00:10:49.222 "memory_domains": [ 00:10:49.222 { 00:10:49.222 "dma_device_id": "system", 00:10:49.222 "dma_device_type": 1 00:10:49.222 }, 00:10:49.222 { 00:10:49.222 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.223 "dma_device_type": 2 00:10:49.223 }, 00:10:49.223 { 00:10:49.223 "dma_device_id": "system", 00:10:49.223 "dma_device_type": 1 00:10:49.223 }, 00:10:49.223 { 00:10:49.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.223 "dma_device_type": 2 00:10:49.223 }, 00:10:49.223 { 00:10:49.223 "dma_device_id": "system", 00:10:49.223 "dma_device_type": 1 00:10:49.223 }, 00:10:49.223 { 00:10:49.223 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.223 "dma_device_type": 2 00:10:49.223 } 00:10:49.223 ], 00:10:49.223 "driver_specific": { 00:10:49.223 "raid": { 00:10:49.223 "uuid": "ec658b46-4144-45a1-9c51-7e386d8455a2", 00:10:49.223 "strip_size_kb": 64, 00:10:49.223 "state": "online", 00:10:49.223 "raid_level": "concat", 00:10:49.223 "superblock": true, 00:10:49.223 "num_base_bdevs": 3, 00:10:49.223 "num_base_bdevs_discovered": 3, 00:10:49.223 "num_base_bdevs_operational": 3, 00:10:49.223 "base_bdevs_list": [ 00:10:49.223 { 00:10:49.223 
"name": "BaseBdev1", 00:10:49.223 "uuid": "c3f9847c-e4a3-4819-93d9-5107671a07fa", 00:10:49.223 "is_configured": true, 00:10:49.223 "data_offset": 2048, 00:10:49.223 "data_size": 63488 00:10:49.223 }, 00:10:49.223 { 00:10:49.223 "name": "BaseBdev2", 00:10:49.223 "uuid": "5ac36b38-de51-4a6c-8325-ef1b50d6e977", 00:10:49.223 "is_configured": true, 00:10:49.223 "data_offset": 2048, 00:10:49.223 "data_size": 63488 00:10:49.223 }, 00:10:49.223 { 00:10:49.223 "name": "BaseBdev3", 00:10:49.223 "uuid": "9883e2d8-dcaf-450c-99d9-3d959a66c12d", 00:10:49.223 "is_configured": true, 00:10:49.223 "data_offset": 2048, 00:10:49.223 "data_size": 63488 00:10:49.223 } 00:10:49.223 ] 00:10:49.223 } 00:10:49.223 } 00:10:49.223 }' 00:10:49.223 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:10:49.481 BaseBdev2 00:10:49.481 BaseBdev3' 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.481 14:10:26 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.481 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.481 [2024-11-27 14:10:26.739396] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:49.481 [2024-11-27 14:10:26.739555] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:49.481 [2024-11-27 14:10:26.739655] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=offline 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:49.739 "name": "Existed_Raid", 00:10:49.739 "uuid": "ec658b46-4144-45a1-9c51-7e386d8455a2", 00:10:49.739 "strip_size_kb": 64, 00:10:49.739 "state": "offline", 00:10:49.739 "raid_level": "concat", 00:10:49.739 "superblock": true, 00:10:49.739 "num_base_bdevs": 3, 00:10:49.739 "num_base_bdevs_discovered": 2, 00:10:49.739 "num_base_bdevs_operational": 2, 00:10:49.739 "base_bdevs_list": [ 00:10:49.739 { 00:10:49.739 "name": null, 00:10:49.739 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:10:49.739 "is_configured": false, 00:10:49.739 "data_offset": 0, 00:10:49.739 "data_size": 63488 00:10:49.739 }, 00:10:49.739 { 00:10:49.739 "name": "BaseBdev2", 00:10:49.739 "uuid": "5ac36b38-de51-4a6c-8325-ef1b50d6e977", 00:10:49.739 "is_configured": true, 00:10:49.739 "data_offset": 2048, 00:10:49.739 "data_size": 63488 00:10:49.739 }, 00:10:49.739 { 00:10:49.739 "name": "BaseBdev3", 00:10:49.739 "uuid": "9883e2d8-dcaf-450c-99d9-3d959a66c12d", 00:10:49.739 "is_configured": true, 00:10:49.739 "data_offset": 2048, 00:10:49.739 "data_size": 63488 00:10:49.739 } 00:10:49.739 ] 00:10:49.739 }' 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:49.739 14:10:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 
00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.305 [2024-11-27 14:10:27.490469] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.305 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.564 [2024-11-27 14:10:27.638243] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:50.564 [2024-11-27 14:10:27.638324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.564 BaseBdev2 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.564 
14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.564 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.823 [ 00:10:50.823 { 00:10:50.823 "name": "BaseBdev2", 00:10:50.823 "aliases": [ 00:10:50.823 "7d986d24-1263-4a48-a22d-7360ea9d77f8" 00:10:50.823 ], 00:10:50.823 "product_name": "Malloc disk", 00:10:50.823 "block_size": 512, 00:10:50.823 "num_blocks": 65536, 00:10:50.823 "uuid": "7d986d24-1263-4a48-a22d-7360ea9d77f8", 00:10:50.823 "assigned_rate_limits": { 00:10:50.823 "rw_ios_per_sec": 0, 00:10:50.823 "rw_mbytes_per_sec": 0, 00:10:50.823 "r_mbytes_per_sec": 0, 00:10:50.823 "w_mbytes_per_sec": 0 
00:10:50.823 }, 00:10:50.823 "claimed": false, 00:10:50.823 "zoned": false, 00:10:50.823 "supported_io_types": { 00:10:50.823 "read": true, 00:10:50.823 "write": true, 00:10:50.823 "unmap": true, 00:10:50.823 "flush": true, 00:10:50.823 "reset": true, 00:10:50.823 "nvme_admin": false, 00:10:50.823 "nvme_io": false, 00:10:50.823 "nvme_io_md": false, 00:10:50.823 "write_zeroes": true, 00:10:50.823 "zcopy": true, 00:10:50.823 "get_zone_info": false, 00:10:50.823 "zone_management": false, 00:10:50.823 "zone_append": false, 00:10:50.823 "compare": false, 00:10:50.823 "compare_and_write": false, 00:10:50.823 "abort": true, 00:10:50.823 "seek_hole": false, 00:10:50.823 "seek_data": false, 00:10:50.823 "copy": true, 00:10:50.823 "nvme_iov_md": false 00:10:50.823 }, 00:10:50.823 "memory_domains": [ 00:10:50.823 { 00:10:50.823 "dma_device_id": "system", 00:10:50.823 "dma_device_type": 1 00:10:50.823 }, 00:10:50.823 { 00:10:50.823 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.823 "dma_device_type": 2 00:10:50.823 } 00:10:50.823 ], 00:10:50.823 "driver_specific": {} 00:10:50.823 } 00:10:50.823 ] 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.823 BaseBdev3 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.823 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.823 [ 00:10:50.823 { 00:10:50.823 "name": "BaseBdev3", 00:10:50.824 "aliases": [ 00:10:50.824 "d4f36837-6dad-4f9c-b23b-115765b673fe" 00:10:50.824 ], 00:10:50.824 "product_name": "Malloc disk", 00:10:50.824 "block_size": 512, 00:10:50.824 "num_blocks": 65536, 00:10:50.824 "uuid": "d4f36837-6dad-4f9c-b23b-115765b673fe", 00:10:50.824 "assigned_rate_limits": { 00:10:50.824 "rw_ios_per_sec": 0, 00:10:50.824 "rw_mbytes_per_sec": 0, 
00:10:50.824 "r_mbytes_per_sec": 0, 00:10:50.824 "w_mbytes_per_sec": 0 00:10:50.824 }, 00:10:50.824 "claimed": false, 00:10:50.824 "zoned": false, 00:10:50.824 "supported_io_types": { 00:10:50.824 "read": true, 00:10:50.824 "write": true, 00:10:50.824 "unmap": true, 00:10:50.824 "flush": true, 00:10:50.824 "reset": true, 00:10:50.824 "nvme_admin": false, 00:10:50.824 "nvme_io": false, 00:10:50.824 "nvme_io_md": false, 00:10:50.824 "write_zeroes": true, 00:10:50.824 "zcopy": true, 00:10:50.824 "get_zone_info": false, 00:10:50.824 "zone_management": false, 00:10:50.824 "zone_append": false, 00:10:50.824 "compare": false, 00:10:50.824 "compare_and_write": false, 00:10:50.824 "abort": true, 00:10:50.824 "seek_hole": false, 00:10:50.824 "seek_data": false, 00:10:50.824 "copy": true, 00:10:50.824 "nvme_iov_md": false 00:10:50.824 }, 00:10:50.824 "memory_domains": [ 00:10:50.824 { 00:10:50.824 "dma_device_id": "system", 00:10:50.824 "dma_device_type": 1 00:10:50.824 }, 00:10:50.824 { 00:10:50.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:50.824 "dma_device_type": 2 00:10:50.824 } 00:10:50.824 ], 00:10:50.824 "driver_specific": {} 00:10:50.824 } 00:10:50.824 ] 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:50.824 [2024-11-27 14:10:27.926441] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:50.824 [2024-11-27 14:10:27.926645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:50.824 [2024-11-27 14:10:27.926820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:50.824 [2024-11-27 14:10:27.929337] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:50.824 14:10:27 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:50.824 "name": "Existed_Raid", 00:10:50.824 "uuid": "c9cf0049-37c1-45f7-847b-3c0226b5d37a", 00:10:50.824 "strip_size_kb": 64, 00:10:50.824 "state": "configuring", 00:10:50.824 "raid_level": "concat", 00:10:50.824 "superblock": true, 00:10:50.824 "num_base_bdevs": 3, 00:10:50.824 "num_base_bdevs_discovered": 2, 00:10:50.824 "num_base_bdevs_operational": 3, 00:10:50.824 "base_bdevs_list": [ 00:10:50.824 { 00:10:50.824 "name": "BaseBdev1", 00:10:50.824 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.824 "is_configured": false, 00:10:50.824 "data_offset": 0, 00:10:50.824 "data_size": 0 00:10:50.824 }, 00:10:50.824 { 00:10:50.824 "name": "BaseBdev2", 00:10:50.824 "uuid": "7d986d24-1263-4a48-a22d-7360ea9d77f8", 00:10:50.824 "is_configured": true, 00:10:50.824 "data_offset": 2048, 00:10:50.824 "data_size": 63488 00:10:50.824 }, 00:10:50.824 { 00:10:50.824 "name": "BaseBdev3", 00:10:50.824 "uuid": "d4f36837-6dad-4f9c-b23b-115765b673fe", 00:10:50.824 "is_configured": true, 00:10:50.824 "data_offset": 2048, 00:10:50.824 "data_size": 63488 00:10:50.824 } 00:10:50.824 ] 00:10:50.824 }' 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:50.824 14:10:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.391 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 
00:10:51.391 14:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.391 14:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.391 [2024-11-27 14:10:28.466597] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:51.391 14:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.391 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:51.391 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.391 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.391 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.391 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.391 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.392 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.392 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.392 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.392 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.392 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.392 14:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.392 14:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.392 14:10:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.392 14:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.392 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.392 "name": "Existed_Raid", 00:10:51.392 "uuid": "c9cf0049-37c1-45f7-847b-3c0226b5d37a", 00:10:51.392 "strip_size_kb": 64, 00:10:51.392 "state": "configuring", 00:10:51.392 "raid_level": "concat", 00:10:51.392 "superblock": true, 00:10:51.392 "num_base_bdevs": 3, 00:10:51.392 "num_base_bdevs_discovered": 1, 00:10:51.392 "num_base_bdevs_operational": 3, 00:10:51.392 "base_bdevs_list": [ 00:10:51.392 { 00:10:51.392 "name": "BaseBdev1", 00:10:51.392 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.392 "is_configured": false, 00:10:51.392 "data_offset": 0, 00:10:51.392 "data_size": 0 00:10:51.392 }, 00:10:51.392 { 00:10:51.392 "name": null, 00:10:51.392 "uuid": "7d986d24-1263-4a48-a22d-7360ea9d77f8", 00:10:51.392 "is_configured": false, 00:10:51.392 "data_offset": 0, 00:10:51.392 "data_size": 63488 00:10:51.392 }, 00:10:51.392 { 00:10:51.392 "name": "BaseBdev3", 00:10:51.392 "uuid": "d4f36837-6dad-4f9c-b23b-115765b673fe", 00:10:51.392 "is_configured": true, 00:10:51.392 "data_offset": 2048, 00:10:51.392 "data_size": 63488 00:10:51.392 } 00:10:51.392 ] 00:10:51.392 }' 00:10:51.392 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.392 14:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.960 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.960 14:10:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:51.960 14:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:51.960 14:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.960 14:10:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.960 [2024-11-27 14:10:29.044665] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:51.960 BaseBdev1 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:51.960 [ 00:10:51.960 { 00:10:51.960 "name": "BaseBdev1", 00:10:51.960 "aliases": [ 00:10:51.960 "ce788925-c1be-4e1f-a844-89ce08695853" 00:10:51.960 ], 00:10:51.960 "product_name": "Malloc disk", 00:10:51.960 "block_size": 512, 00:10:51.960 "num_blocks": 65536, 00:10:51.960 "uuid": "ce788925-c1be-4e1f-a844-89ce08695853", 00:10:51.960 "assigned_rate_limits": { 00:10:51.960 "rw_ios_per_sec": 0, 00:10:51.960 "rw_mbytes_per_sec": 0, 00:10:51.960 "r_mbytes_per_sec": 0, 00:10:51.960 "w_mbytes_per_sec": 0 00:10:51.960 }, 00:10:51.960 "claimed": true, 00:10:51.960 "claim_type": "exclusive_write", 00:10:51.960 "zoned": false, 00:10:51.960 "supported_io_types": { 00:10:51.960 "read": true, 00:10:51.960 "write": true, 00:10:51.960 "unmap": true, 00:10:51.960 "flush": true, 00:10:51.960 "reset": true, 00:10:51.960 "nvme_admin": false, 00:10:51.960 "nvme_io": false, 00:10:51.960 "nvme_io_md": false, 00:10:51.960 "write_zeroes": true, 00:10:51.960 "zcopy": true, 00:10:51.960 "get_zone_info": false, 00:10:51.960 "zone_management": false, 00:10:51.960 "zone_append": false, 00:10:51.960 "compare": false, 00:10:51.960 "compare_and_write": false, 00:10:51.960 "abort": true, 00:10:51.960 "seek_hole": false, 00:10:51.960 "seek_data": false, 00:10:51.960 "copy": true, 00:10:51.960 "nvme_iov_md": false 00:10:51.960 }, 00:10:51.960 "memory_domains": [ 00:10:51.960 { 00:10:51.960 "dma_device_id": "system", 00:10:51.960 "dma_device_type": 1 00:10:51.960 }, 00:10:51.960 { 00:10:51.960 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:51.960 "dma_device_type": 2 00:10:51.960 } 00:10:51.960 ], 00:10:51.960 "driver_specific": {} 00:10:51.960 } 00:10:51.960 ] 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:51.960 "name": "Existed_Raid", 00:10:51.960 "uuid": "c9cf0049-37c1-45f7-847b-3c0226b5d37a", 00:10:51.960 "strip_size_kb": 64, 00:10:51.960 "state": "configuring", 00:10:51.960 "raid_level": "concat", 00:10:51.960 "superblock": true, 00:10:51.960 "num_base_bdevs": 3, 00:10:51.960 "num_base_bdevs_discovered": 2, 00:10:51.960 "num_base_bdevs_operational": 3, 00:10:51.960 "base_bdevs_list": [ 00:10:51.960 { 00:10:51.960 "name": "BaseBdev1", 00:10:51.960 "uuid": "ce788925-c1be-4e1f-a844-89ce08695853", 00:10:51.960 "is_configured": true, 00:10:51.960 "data_offset": 2048, 00:10:51.960 "data_size": 63488 00:10:51.960 }, 00:10:51.960 { 00:10:51.960 "name": null, 00:10:51.960 "uuid": "7d986d24-1263-4a48-a22d-7360ea9d77f8", 00:10:51.960 "is_configured": false, 00:10:51.960 "data_offset": 0, 00:10:51.960 "data_size": 63488 00:10:51.960 }, 00:10:51.960 { 00:10:51.960 "name": "BaseBdev3", 00:10:51.960 "uuid": "d4f36837-6dad-4f9c-b23b-115765b673fe", 00:10:51.960 "is_configured": true, 00:10:51.960 "data_offset": 2048, 00:10:51.960 "data_size": 63488 00:10:51.960 } 00:10:51.960 ] 00:10:51.960 }' 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:51.960 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.528 [2024-11-27 14:10:29.640970] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:52.528 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:52.529 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:52.529 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:52.529 14:10:29 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:10:52.529 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:52.529 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.529 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:52.529 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.529 14:10:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.529 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:52.529 "name": "Existed_Raid", 00:10:52.529 "uuid": "c9cf0049-37c1-45f7-847b-3c0226b5d37a", 00:10:52.529 "strip_size_kb": 64, 00:10:52.529 "state": "configuring", 00:10:52.529 "raid_level": "concat", 00:10:52.529 "superblock": true, 00:10:52.529 "num_base_bdevs": 3, 00:10:52.529 "num_base_bdevs_discovered": 1, 00:10:52.529 "num_base_bdevs_operational": 3, 00:10:52.529 "base_bdevs_list": [ 00:10:52.529 { 00:10:52.529 "name": "BaseBdev1", 00:10:52.529 "uuid": "ce788925-c1be-4e1f-a844-89ce08695853", 00:10:52.529 "is_configured": true, 00:10:52.529 "data_offset": 2048, 00:10:52.529 "data_size": 63488 00:10:52.529 }, 00:10:52.529 { 00:10:52.529 "name": null, 00:10:52.529 "uuid": "7d986d24-1263-4a48-a22d-7360ea9d77f8", 00:10:52.529 "is_configured": false, 00:10:52.529 "data_offset": 0, 00:10:52.529 "data_size": 63488 00:10:52.529 }, 00:10:52.529 { 00:10:52.529 "name": null, 00:10:52.529 "uuid": "d4f36837-6dad-4f9c-b23b-115765b673fe", 00:10:52.529 "is_configured": false, 00:10:52.529 "data_offset": 0, 00:10:52.529 "data_size": 63488 00:10:52.529 } 00:10:52.529 ] 00:10:52.529 }' 00:10:52.529 14:10:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:52.529 14:10:29 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:10:53.096 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.096 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.097 [2024-11-27 14:10:30.189162] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.097 14:10:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.097 "name": "Existed_Raid", 00:10:53.097 "uuid": "c9cf0049-37c1-45f7-847b-3c0226b5d37a", 00:10:53.097 "strip_size_kb": 64, 00:10:53.097 "state": "configuring", 00:10:53.097 "raid_level": "concat", 00:10:53.097 "superblock": true, 00:10:53.097 "num_base_bdevs": 3, 00:10:53.097 "num_base_bdevs_discovered": 2, 00:10:53.097 "num_base_bdevs_operational": 3, 00:10:53.097 "base_bdevs_list": [ 00:10:53.097 { 00:10:53.097 "name": "BaseBdev1", 00:10:53.097 "uuid": "ce788925-c1be-4e1f-a844-89ce08695853", 00:10:53.097 "is_configured": true, 00:10:53.097 "data_offset": 2048, 00:10:53.097 "data_size": 63488 00:10:53.097 }, 00:10:53.097 { 00:10:53.097 "name": null, 00:10:53.097 "uuid": "7d986d24-1263-4a48-a22d-7360ea9d77f8", 00:10:53.097 "is_configured": 
false, 00:10:53.097 "data_offset": 0, 00:10:53.097 "data_size": 63488 00:10:53.097 }, 00:10:53.097 { 00:10:53.097 "name": "BaseBdev3", 00:10:53.097 "uuid": "d4f36837-6dad-4f9c-b23b-115765b673fe", 00:10:53.097 "is_configured": true, 00:10:53.097 "data_offset": 2048, 00:10:53.097 "data_size": 63488 00:10:53.097 } 00:10:53.097 ] 00:10:53.097 }' 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.097 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.665 [2024-11-27 14:10:30.761379] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:53.665 14:10:30 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:53.665 "name": "Existed_Raid", 00:10:53.665 "uuid": "c9cf0049-37c1-45f7-847b-3c0226b5d37a", 00:10:53.665 "strip_size_kb": 64, 00:10:53.665 "state": "configuring", 00:10:53.665 "raid_level": "concat", 00:10:53.665 "superblock": true, 00:10:53.665 "num_base_bdevs": 3, 00:10:53.665 
"num_base_bdevs_discovered": 1, 00:10:53.665 "num_base_bdevs_operational": 3, 00:10:53.665 "base_bdevs_list": [ 00:10:53.665 { 00:10:53.665 "name": null, 00:10:53.665 "uuid": "ce788925-c1be-4e1f-a844-89ce08695853", 00:10:53.665 "is_configured": false, 00:10:53.665 "data_offset": 0, 00:10:53.665 "data_size": 63488 00:10:53.665 }, 00:10:53.665 { 00:10:53.665 "name": null, 00:10:53.665 "uuid": "7d986d24-1263-4a48-a22d-7360ea9d77f8", 00:10:53.665 "is_configured": false, 00:10:53.665 "data_offset": 0, 00:10:53.665 "data_size": 63488 00:10:53.665 }, 00:10:53.665 { 00:10:53.665 "name": "BaseBdev3", 00:10:53.665 "uuid": "d4f36837-6dad-4f9c-b23b-115765b673fe", 00:10:53.665 "is_configured": true, 00:10:53.665 "data_offset": 2048, 00:10:53.665 "data_size": 63488 00:10:53.665 } 00:10:53.665 ] 00:10:53.665 }' 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:53.665 14:10:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.235 14:10:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.235 [2024-11-27 14:10:31.436547] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:54.235 
14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:54.235 "name": "Existed_Raid", 00:10:54.235 "uuid": "c9cf0049-37c1-45f7-847b-3c0226b5d37a", 00:10:54.235 "strip_size_kb": 64, 00:10:54.235 "state": "configuring", 00:10:54.235 "raid_level": "concat", 00:10:54.235 "superblock": true, 00:10:54.235 "num_base_bdevs": 3, 00:10:54.235 "num_base_bdevs_discovered": 2, 00:10:54.235 "num_base_bdevs_operational": 3, 00:10:54.235 "base_bdevs_list": [ 00:10:54.235 { 00:10:54.235 "name": null, 00:10:54.235 "uuid": "ce788925-c1be-4e1f-a844-89ce08695853", 00:10:54.235 "is_configured": false, 00:10:54.235 "data_offset": 0, 00:10:54.235 "data_size": 63488 00:10:54.235 }, 00:10:54.235 { 00:10:54.235 "name": "BaseBdev2", 00:10:54.235 "uuid": "7d986d24-1263-4a48-a22d-7360ea9d77f8", 00:10:54.235 "is_configured": true, 00:10:54.235 "data_offset": 2048, 00:10:54.235 "data_size": 63488 00:10:54.235 }, 00:10:54.235 { 00:10:54.235 "name": "BaseBdev3", 00:10:54.235 "uuid": "d4f36837-6dad-4f9c-b23b-115765b673fe", 00:10:54.235 "is_configured": true, 00:10:54.235 "data_offset": 2048, 00:10:54.235 "data_size": 63488 00:10:54.235 } 00:10:54.235 ] 00:10:54.235 }' 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:54.235 14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.802 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:54.802 14:10:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.802 14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.802 14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:10:54.802 14:10:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.802 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:10:54.802 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:54.802 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:54.803 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.803 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:54.803 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.803 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ce788925-c1be-4e1f-a844-89ce08695853 00:10:54.803 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.803 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.062 [2024-11-27 14:10:32.095339] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:55.062 [2024-11-27 14:10:32.095906] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:10:55.062 [2024-11-27 14:10:32.095948] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:55.062 NewBaseBdev 00:10:55.062 [2024-11-27 14:10:32.096251] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:10:55.062 [2024-11-27 14:10:32.096432] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:10:55.062 [2024-11-27 14:10:32.096459] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000008200 00:10:55.062 [2024-11-27 14:10:32.096625] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.062 [ 00:10:55.062 { 00:10:55.062 "name": "NewBaseBdev", 00:10:55.062 "aliases": [ 00:10:55.062 "ce788925-c1be-4e1f-a844-89ce08695853" 00:10:55.062 ], 00:10:55.062 "product_name": "Malloc disk", 00:10:55.062 "block_size": 512, 
00:10:55.062 "num_blocks": 65536, 00:10:55.062 "uuid": "ce788925-c1be-4e1f-a844-89ce08695853", 00:10:55.062 "assigned_rate_limits": { 00:10:55.062 "rw_ios_per_sec": 0, 00:10:55.062 "rw_mbytes_per_sec": 0, 00:10:55.062 "r_mbytes_per_sec": 0, 00:10:55.062 "w_mbytes_per_sec": 0 00:10:55.062 }, 00:10:55.062 "claimed": true, 00:10:55.062 "claim_type": "exclusive_write", 00:10:55.062 "zoned": false, 00:10:55.062 "supported_io_types": { 00:10:55.062 "read": true, 00:10:55.062 "write": true, 00:10:55.062 "unmap": true, 00:10:55.062 "flush": true, 00:10:55.062 "reset": true, 00:10:55.062 "nvme_admin": false, 00:10:55.062 "nvme_io": false, 00:10:55.062 "nvme_io_md": false, 00:10:55.062 "write_zeroes": true, 00:10:55.062 "zcopy": true, 00:10:55.062 "get_zone_info": false, 00:10:55.062 "zone_management": false, 00:10:55.062 "zone_append": false, 00:10:55.062 "compare": false, 00:10:55.062 "compare_and_write": false, 00:10:55.062 "abort": true, 00:10:55.062 "seek_hole": false, 00:10:55.062 "seek_data": false, 00:10:55.062 "copy": true, 00:10:55.062 "nvme_iov_md": false 00:10:55.062 }, 00:10:55.062 "memory_domains": [ 00:10:55.062 { 00:10:55.062 "dma_device_id": "system", 00:10:55.062 "dma_device_type": 1 00:10:55.062 }, 00:10:55.062 { 00:10:55.062 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.062 "dma_device_type": 2 00:10:55.062 } 00:10:55.062 ], 00:10:55.062 "driver_specific": {} 00:10:55.062 } 00:10:55.062 ] 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 
-- # local expected_state=online 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:55.062 "name": "Existed_Raid", 00:10:55.062 "uuid": "c9cf0049-37c1-45f7-847b-3c0226b5d37a", 00:10:55.062 "strip_size_kb": 64, 00:10:55.062 "state": "online", 00:10:55.062 "raid_level": "concat", 00:10:55.062 "superblock": true, 00:10:55.062 "num_base_bdevs": 3, 00:10:55.062 "num_base_bdevs_discovered": 3, 00:10:55.062 "num_base_bdevs_operational": 3, 00:10:55.062 "base_bdevs_list": [ 00:10:55.062 { 00:10:55.062 "name": "NewBaseBdev", 00:10:55.062 "uuid": 
"ce788925-c1be-4e1f-a844-89ce08695853", 00:10:55.062 "is_configured": true, 00:10:55.062 "data_offset": 2048, 00:10:55.062 "data_size": 63488 00:10:55.062 }, 00:10:55.062 { 00:10:55.062 "name": "BaseBdev2", 00:10:55.062 "uuid": "7d986d24-1263-4a48-a22d-7360ea9d77f8", 00:10:55.062 "is_configured": true, 00:10:55.062 "data_offset": 2048, 00:10:55.062 "data_size": 63488 00:10:55.062 }, 00:10:55.062 { 00:10:55.062 "name": "BaseBdev3", 00:10:55.062 "uuid": "d4f36837-6dad-4f9c-b23b-115765b673fe", 00:10:55.062 "is_configured": true, 00:10:55.062 "data_offset": 2048, 00:10:55.062 "data_size": 63488 00:10:55.062 } 00:10:55.062 ] 00:10:55.062 }' 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:55.062 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq 
'.[]' 00:10:55.630 [2024-11-27 14:10:32.619919] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.630 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:55.630 "name": "Existed_Raid", 00:10:55.630 "aliases": [ 00:10:55.630 "c9cf0049-37c1-45f7-847b-3c0226b5d37a" 00:10:55.630 ], 00:10:55.630 "product_name": "Raid Volume", 00:10:55.630 "block_size": 512, 00:10:55.631 "num_blocks": 190464, 00:10:55.631 "uuid": "c9cf0049-37c1-45f7-847b-3c0226b5d37a", 00:10:55.631 "assigned_rate_limits": { 00:10:55.631 "rw_ios_per_sec": 0, 00:10:55.631 "rw_mbytes_per_sec": 0, 00:10:55.631 "r_mbytes_per_sec": 0, 00:10:55.631 "w_mbytes_per_sec": 0 00:10:55.631 }, 00:10:55.631 "claimed": false, 00:10:55.631 "zoned": false, 00:10:55.631 "supported_io_types": { 00:10:55.631 "read": true, 00:10:55.631 "write": true, 00:10:55.631 "unmap": true, 00:10:55.631 "flush": true, 00:10:55.631 "reset": true, 00:10:55.631 "nvme_admin": false, 00:10:55.631 "nvme_io": false, 00:10:55.631 "nvme_io_md": false, 00:10:55.631 "write_zeroes": true, 00:10:55.631 "zcopy": false, 00:10:55.631 "get_zone_info": false, 00:10:55.631 "zone_management": false, 00:10:55.631 "zone_append": false, 00:10:55.631 "compare": false, 00:10:55.631 "compare_and_write": false, 00:10:55.631 "abort": false, 00:10:55.631 "seek_hole": false, 00:10:55.631 "seek_data": false, 00:10:55.631 "copy": false, 00:10:55.631 "nvme_iov_md": false 00:10:55.631 }, 00:10:55.631 "memory_domains": [ 00:10:55.631 { 00:10:55.631 "dma_device_id": "system", 00:10:55.631 "dma_device_type": 1 00:10:55.631 }, 00:10:55.631 { 00:10:55.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.631 "dma_device_type": 2 00:10:55.631 }, 00:10:55.631 { 00:10:55.631 "dma_device_id": "system", 00:10:55.631 "dma_device_type": 1 00:10:55.631 }, 00:10:55.631 { 00:10:55.631 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.631 "dma_device_type": 2 00:10:55.631 }, 00:10:55.631 { 00:10:55.631 "dma_device_id": "system", 00:10:55.631 "dma_device_type": 1 00:10:55.631 }, 00:10:55.631 { 00:10:55.631 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:55.631 "dma_device_type": 2 00:10:55.631 } 00:10:55.631 ], 00:10:55.631 "driver_specific": { 00:10:55.631 "raid": { 00:10:55.631 "uuid": "c9cf0049-37c1-45f7-847b-3c0226b5d37a", 00:10:55.631 "strip_size_kb": 64, 00:10:55.631 "state": "online", 00:10:55.631 "raid_level": "concat", 00:10:55.631 "superblock": true, 00:10:55.631 "num_base_bdevs": 3, 00:10:55.631 "num_base_bdevs_discovered": 3, 00:10:55.631 "num_base_bdevs_operational": 3, 00:10:55.631 "base_bdevs_list": [ 00:10:55.631 { 00:10:55.631 "name": "NewBaseBdev", 00:10:55.631 "uuid": "ce788925-c1be-4e1f-a844-89ce08695853", 00:10:55.631 "is_configured": true, 00:10:55.631 "data_offset": 2048, 00:10:55.631 "data_size": 63488 00:10:55.631 }, 00:10:55.631 { 00:10:55.631 "name": "BaseBdev2", 00:10:55.631 "uuid": "7d986d24-1263-4a48-a22d-7360ea9d77f8", 00:10:55.631 "is_configured": true, 00:10:55.631 "data_offset": 2048, 00:10:55.631 "data_size": 63488 00:10:55.631 }, 00:10:55.631 { 00:10:55.631 "name": "BaseBdev3", 00:10:55.631 "uuid": "d4f36837-6dad-4f9c-b23b-115765b673fe", 00:10:55.631 "is_configured": true, 00:10:55.631 "data_offset": 2048, 00:10:55.631 "data_size": 63488 00:10:55.631 } 00:10:55.631 ] 00:10:55.631 } 00:10:55.631 } 00:10:55.631 }' 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:10:55.631 BaseBdev2 00:10:55.631 BaseBdev3' 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 
00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.631 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.890 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:55.890 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:55.890 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:10:55.890 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:55.890 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.890 [2024-11-27 14:10:32.935622] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:55.890 [2024-11-27 14:10:32.935841] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:55.890 [2024-11-27 14:10:32.935965] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:55.890 [2024-11-27 14:10:32.936040] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:55.890 [2024-11-27 14:10:32.936061] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 66162 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 66162 ']' 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 66162 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66162 00:10:55.891 killing process with pid 66162 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66162' 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 66162 00:10:55.891 [2024-11-27 14:10:32.974523] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:55.891 14:10:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 66162 00:10:56.157 [2024-11-27 14:10:33.244827] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:57.110 14:10:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:10:57.110 00:10:57.110 real 0m11.966s 00:10:57.110 user 0m19.904s 00:10:57.110 sys 0m1.610s 00:10:57.110 ************************************ 00:10:57.110 END TEST raid_state_function_test_sb 
00:10:57.110 ************************************ 00:10:57.110 14:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:57.110 14:10:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:57.110 14:10:34 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:10:57.110 14:10:34 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:57.110 14:10:34 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:57.110 14:10:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:57.110 ************************************ 00:10:57.110 START TEST raid_superblock_test 00:10:57.110 ************************************ 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 3 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:10:57.110 14:10:34 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=66798 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 66798 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 66798 ']' 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:57.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:57.110 14:10:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:57.369 [2024-11-27 14:10:34.466848] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:10:57.369 [2024-11-27 14:10:34.467830] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid66798 ] 00:10:57.627 [2024-11-27 14:10:34.656120] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:57.627 [2024-11-27 14:10:34.810154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:57.886 [2024-11-27 14:10:35.017819] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:57.886 [2024-11-27 14:10:35.017889] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:58.452 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:58.452 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:10:58.452 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:10:58.452 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.452 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:10:58.452 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:10:58.453 
14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.453 malloc1 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.453 [2024-11-27 14:10:35.487991] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:58.453 [2024-11-27 14:10:35.488208] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.453 [2024-11-27 14:10:35.488288] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:10:58.453 [2024-11-27 14:10:35.488560] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.453 [2024-11-27 14:10:35.491334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.453 [2024-11-27 14:10:35.491499] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:58.453 pt1 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.453 malloc2 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.453 [2024-11-27 14:10:35.540105] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:58.453 [2024-11-27 14:10:35.540187] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.453 [2024-11-27 14:10:35.540225] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:10:58.453 [2024-11-27 14:10:35.540242] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.453 [2024-11-27 14:10:35.543044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.453 [2024-11-27 14:10:35.543248] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:58.453 
pt2 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.453 malloc3 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.453 [2024-11-27 14:10:35.606347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:58.453 [2024-11-27 14:10:35.606413] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:58.453 [2024-11-27 14:10:35.606450] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:10:58.453 [2024-11-27 14:10:35.606467] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:58.453 [2024-11-27 14:10:35.609189] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:58.453 [2024-11-27 14:10:35.609353] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:58.453 pt3 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.453 [2024-11-27 14:10:35.618389] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:58.453 [2024-11-27 14:10:35.620824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:58.453 [2024-11-27 14:10:35.620921] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:58.453 [2024-11-27 14:10:35.621123] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:10:58.453 [2024-11-27 14:10:35.621148] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:58.453 [2024-11-27 14:10:35.621453] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:10:58.453 [2024-11-27 14:10:35.621658] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:10:58.453 [2024-11-27 14:10:35.621674] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:10:58.453 [2024-11-27 14:10:35.621872] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:58.453 14:10:35 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:58.453 "name": "raid_bdev1", 00:10:58.453 "uuid": "7f2e24f7-8676-49c2-af92-934826d116e4", 00:10:58.453 "strip_size_kb": 64, 00:10:58.453 "state": "online", 00:10:58.453 "raid_level": "concat", 00:10:58.453 "superblock": true, 00:10:58.453 "num_base_bdevs": 3, 00:10:58.453 "num_base_bdevs_discovered": 3, 00:10:58.453 "num_base_bdevs_operational": 3, 00:10:58.453 "base_bdevs_list": [ 00:10:58.453 { 00:10:58.453 "name": "pt1", 00:10:58.453 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:58.453 "is_configured": true, 00:10:58.453 "data_offset": 2048, 00:10:58.453 "data_size": 63488 00:10:58.453 }, 00:10:58.453 { 00:10:58.453 "name": "pt2", 00:10:58.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:58.453 "is_configured": true, 00:10:58.453 "data_offset": 2048, 00:10:58.453 "data_size": 63488 00:10:58.453 }, 00:10:58.453 { 00:10:58.453 "name": "pt3", 00:10:58.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:58.453 "is_configured": true, 00:10:58.453 "data_offset": 2048, 00:10:58.453 "data_size": 63488 00:10:58.453 } 00:10:58.453 ] 00:10:58.453 }' 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:58.453 14:10:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local 
base_bdev_names 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.020 [2024-11-27 14:10:36.066901] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.020 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:10:59.020 "name": "raid_bdev1", 00:10:59.020 "aliases": [ 00:10:59.020 "7f2e24f7-8676-49c2-af92-934826d116e4" 00:10:59.020 ], 00:10:59.020 "product_name": "Raid Volume", 00:10:59.020 "block_size": 512, 00:10:59.020 "num_blocks": 190464, 00:10:59.020 "uuid": "7f2e24f7-8676-49c2-af92-934826d116e4", 00:10:59.020 "assigned_rate_limits": { 00:10:59.020 "rw_ios_per_sec": 0, 00:10:59.020 "rw_mbytes_per_sec": 0, 00:10:59.020 "r_mbytes_per_sec": 0, 00:10:59.020 "w_mbytes_per_sec": 0 00:10:59.020 }, 00:10:59.020 "claimed": false, 00:10:59.020 "zoned": false, 00:10:59.020 "supported_io_types": { 00:10:59.020 "read": true, 00:10:59.020 "write": true, 00:10:59.020 "unmap": true, 00:10:59.020 "flush": true, 00:10:59.020 "reset": true, 00:10:59.020 "nvme_admin": false, 00:10:59.020 "nvme_io": false, 00:10:59.020 "nvme_io_md": false, 00:10:59.020 "write_zeroes": true, 00:10:59.020 "zcopy": false, 00:10:59.020 "get_zone_info": false, 00:10:59.020 "zone_management": false, 00:10:59.020 "zone_append": false, 00:10:59.020 "compare": 
false, 00:10:59.020 "compare_and_write": false, 00:10:59.020 "abort": false, 00:10:59.020 "seek_hole": false, 00:10:59.020 "seek_data": false, 00:10:59.020 "copy": false, 00:10:59.020 "nvme_iov_md": false 00:10:59.020 }, 00:10:59.020 "memory_domains": [ 00:10:59.020 { 00:10:59.020 "dma_device_id": "system", 00:10:59.020 "dma_device_type": 1 00:10:59.020 }, 00:10:59.020 { 00:10:59.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.020 "dma_device_type": 2 00:10:59.020 }, 00:10:59.020 { 00:10:59.020 "dma_device_id": "system", 00:10:59.020 "dma_device_type": 1 00:10:59.020 }, 00:10:59.020 { 00:10:59.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.020 "dma_device_type": 2 00:10:59.020 }, 00:10:59.020 { 00:10:59.020 "dma_device_id": "system", 00:10:59.020 "dma_device_type": 1 00:10:59.020 }, 00:10:59.020 { 00:10:59.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:59.020 "dma_device_type": 2 00:10:59.020 } 00:10:59.020 ], 00:10:59.020 "driver_specific": { 00:10:59.020 "raid": { 00:10:59.020 "uuid": "7f2e24f7-8676-49c2-af92-934826d116e4", 00:10:59.020 "strip_size_kb": 64, 00:10:59.020 "state": "online", 00:10:59.020 "raid_level": "concat", 00:10:59.020 "superblock": true, 00:10:59.020 "num_base_bdevs": 3, 00:10:59.020 "num_base_bdevs_discovered": 3, 00:10:59.020 "num_base_bdevs_operational": 3, 00:10:59.020 "base_bdevs_list": [ 00:10:59.020 { 00:10:59.020 "name": "pt1", 00:10:59.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.020 "is_configured": true, 00:10:59.020 "data_offset": 2048, 00:10:59.020 "data_size": 63488 00:10:59.020 }, 00:10:59.020 { 00:10:59.020 "name": "pt2", 00:10:59.020 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.021 "is_configured": true, 00:10:59.021 "data_offset": 2048, 00:10:59.021 "data_size": 63488 00:10:59.021 }, 00:10:59.021 { 00:10:59.021 "name": "pt3", 00:10:59.021 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.021 "is_configured": true, 00:10:59.021 "data_offset": 2048, 00:10:59.021 
"data_size": 63488 00:10:59.021 } 00:10:59.021 ] 00:10:59.021 } 00:10:59.021 } 00:10:59.021 }' 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:10:59.021 pt2 00:10:59.021 pt3' 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.021 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.279 [2024-11-27 14:10:36.354845] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=7f2e24f7-8676-49c2-af92-934826d116e4 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 7f2e24f7-8676-49c2-af92-934826d116e4 ']' 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.279 [2024-11-27 14:10:36.402513] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.279 [2024-11-27 14:10:36.402670] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:59.279 [2024-11-27 14:10:36.402786] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:59.279 [2024-11-27 14:10:36.402872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:59.279 [2024-11-27 14:10:36.402889] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:10:59.279 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 
00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.280 [2024-11-27 14:10:36.546630] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:59.280 [2024-11-27 14:10:36.549035] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:59.280 
[2024-11-27 14:10:36.549111] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:59.280 [2024-11-27 14:10:36.549184] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:59.280 [2024-11-27 14:10:36.549258] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:59.280 [2024-11-27 14:10:36.549292] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:10:59.280 [2024-11-27 14:10:36.549320] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:59.280 [2024-11-27 14:10:36.549334] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:10:59.280 request: 00:10:59.280 { 00:10:59.280 "name": "raid_bdev1", 00:10:59.280 "raid_level": "concat", 00:10:59.280 "base_bdevs": [ 00:10:59.280 "malloc1", 00:10:59.280 "malloc2", 00:10:59.280 "malloc3" 00:10:59.280 ], 00:10:59.280 "strip_size_kb": 64, 00:10:59.280 "superblock": false, 00:10:59.280 "method": "bdev_raid_create", 00:10:59.280 "req_id": 1 00:10:59.280 } 00:10:59.280 Got JSON-RPC error response 00:10:59.280 response: 00:10:59.280 { 00:10:59.280 "code": -17, 00:10:59.280 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:59.280 } 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:10:59.280 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:59.538 14:10:36 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.538 [2024-11-27 14:10:36.606563] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:59.538 [2024-11-27 14:10:36.606753] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:59.538 [2024-11-27 14:10:36.606846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:10:59.538 [2024-11-27 14:10:36.607097] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:59.538 [2024-11-27 14:10:36.609965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:59.538 [2024-11-27 14:10:36.610115] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:59.538 [2024-11-27 14:10:36.610313] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:59.538 [2024-11-27 14:10:36.610489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt1 is claimed 00:10:59.538 pt1 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:10:59.538 "name": "raid_bdev1", 00:10:59.538 "uuid": 
"7f2e24f7-8676-49c2-af92-934826d116e4", 00:10:59.538 "strip_size_kb": 64, 00:10:59.538 "state": "configuring", 00:10:59.538 "raid_level": "concat", 00:10:59.538 "superblock": true, 00:10:59.538 "num_base_bdevs": 3, 00:10:59.538 "num_base_bdevs_discovered": 1, 00:10:59.538 "num_base_bdevs_operational": 3, 00:10:59.538 "base_bdevs_list": [ 00:10:59.538 { 00:10:59.538 "name": "pt1", 00:10:59.538 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:59.538 "is_configured": true, 00:10:59.538 "data_offset": 2048, 00:10:59.538 "data_size": 63488 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "name": null, 00:10:59.538 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:59.538 "is_configured": false, 00:10:59.538 "data_offset": 2048, 00:10:59.538 "data_size": 63488 00:10:59.538 }, 00:10:59.538 { 00:10:59.538 "name": null, 00:10:59.538 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:59.538 "is_configured": false, 00:10:59.538 "data_offset": 2048, 00:10:59.538 "data_size": 63488 00:10:59.538 } 00:10:59.538 ] 00:10:59.538 }' 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:10:59.538 14:10:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.105 [2024-11-27 14:10:37.114949] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.105 [2024-11-27 14:10:37.115034] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.105 [2024-11-27 14:10:37.115076] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:00.105 [2024-11-27 14:10:37.115093] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.105 [2024-11-27 14:10:37.115634] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.105 [2024-11-27 14:10:37.115666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.105 [2024-11-27 14:10:37.115790] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:00.105 [2024-11-27 14:10:37.115831] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.105 pt2 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.105 [2024-11-27 14:10:37.122932] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=3 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.105 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.105 "name": "raid_bdev1", 00:11:00.105 "uuid": "7f2e24f7-8676-49c2-af92-934826d116e4", 00:11:00.105 "strip_size_kb": 64, 00:11:00.105 "state": "configuring", 00:11:00.105 "raid_level": "concat", 00:11:00.105 "superblock": true, 00:11:00.105 "num_base_bdevs": 3, 00:11:00.105 "num_base_bdevs_discovered": 1, 00:11:00.105 "num_base_bdevs_operational": 3, 00:11:00.105 "base_bdevs_list": [ 00:11:00.105 { 00:11:00.105 "name": "pt1", 00:11:00.105 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.105 "is_configured": true, 00:11:00.105 "data_offset": 2048, 00:11:00.105 "data_size": 63488 00:11:00.106 }, 00:11:00.106 { 00:11:00.106 "name": null, 00:11:00.106 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.106 "is_configured": false, 00:11:00.106 "data_offset": 0, 00:11:00.106 "data_size": 63488 00:11:00.106 }, 00:11:00.106 { 00:11:00.106 "name": null, 00:11:00.106 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:00.106 "is_configured": false, 00:11:00.106 "data_offset": 2048, 00:11:00.106 "data_size": 63488 00:11:00.106 } 00:11:00.106 ] 00:11:00.106 }' 00:11:00.106 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.106 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.674 [2024-11-27 14:10:37.671119] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:00.674 [2024-11-27 14:10:37.671203] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.674 [2024-11-27 14:10:37.671232] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:00.674 [2024-11-27 14:10:37.671251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.674 [2024-11-27 14:10:37.671840] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.674 [2024-11-27 14:10:37.671872] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:00.674 [2024-11-27 14:10:37.671970] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:00.674 [2024-11-27 14:10:37.672007] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:00.674 pt2 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.674 [2024-11-27 14:10:37.679098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:00.674 [2024-11-27 14:10:37.679154] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:00.674 [2024-11-27 14:10:37.679176] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:00.674 [2024-11-27 14:10:37.679193] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:00.674 [2024-11-27 14:10:37.679640] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:00.674 [2024-11-27 14:10:37.679673] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:00.674 [2024-11-27 14:10:37.679750] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:00.674 [2024-11-27 14:10:37.679806] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:00.674 [2024-11-27 14:10:37.679958] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:00.674 [2024-11-27 14:10:37.679978] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:00.674 [2024-11-27 14:10:37.680291] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:00.674 [2024-11-27 
14:10:37.680481] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:00.674 [2024-11-27 14:10:37.680496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:00.674 [2024-11-27 14:10:37.680667] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:00.674 pt3 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:00.674 "name": "raid_bdev1", 00:11:00.674 "uuid": "7f2e24f7-8676-49c2-af92-934826d116e4", 00:11:00.674 "strip_size_kb": 64, 00:11:00.674 "state": "online", 00:11:00.674 "raid_level": "concat", 00:11:00.674 "superblock": true, 00:11:00.674 "num_base_bdevs": 3, 00:11:00.674 "num_base_bdevs_discovered": 3, 00:11:00.674 "num_base_bdevs_operational": 3, 00:11:00.674 "base_bdevs_list": [ 00:11:00.674 { 00:11:00.674 "name": "pt1", 00:11:00.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:00.674 "is_configured": true, 00:11:00.674 "data_offset": 2048, 00:11:00.674 "data_size": 63488 00:11:00.674 }, 00:11:00.674 { 00:11:00.674 "name": "pt2", 00:11:00.674 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:00.674 "is_configured": true, 00:11:00.674 "data_offset": 2048, 00:11:00.674 "data_size": 63488 00:11:00.674 }, 00:11:00.674 { 00:11:00.674 "name": "pt3", 00:11:00.674 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:00.674 "is_configured": true, 00:11:00.674 "data_offset": 2048, 00:11:00.674 "data_size": 63488 00:11:00.674 } 00:11:00.674 ] 00:11:00.674 }' 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:00.674 14:10:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 
00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.243 [2024-11-27 14:10:38.231725] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.243 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:01.243 "name": "raid_bdev1", 00:11:01.243 "aliases": [ 00:11:01.243 "7f2e24f7-8676-49c2-af92-934826d116e4" 00:11:01.243 ], 00:11:01.243 "product_name": "Raid Volume", 00:11:01.243 "block_size": 512, 00:11:01.243 "num_blocks": 190464, 00:11:01.243 "uuid": "7f2e24f7-8676-49c2-af92-934826d116e4", 00:11:01.243 "assigned_rate_limits": { 00:11:01.243 "rw_ios_per_sec": 0, 00:11:01.243 "rw_mbytes_per_sec": 0, 00:11:01.243 "r_mbytes_per_sec": 0, 00:11:01.243 "w_mbytes_per_sec": 0 00:11:01.243 }, 00:11:01.243 "claimed": false, 00:11:01.243 "zoned": false, 00:11:01.243 "supported_io_types": { 00:11:01.243 "read": true, 00:11:01.243 "write": true, 00:11:01.243 "unmap": true, 00:11:01.243 "flush": true, 00:11:01.243 "reset": true, 00:11:01.243 "nvme_admin": false, 00:11:01.243 "nvme_io": false, 00:11:01.243 "nvme_io_md": false, 
00:11:01.243 "write_zeroes": true, 00:11:01.243 "zcopy": false, 00:11:01.243 "get_zone_info": false, 00:11:01.243 "zone_management": false, 00:11:01.243 "zone_append": false, 00:11:01.243 "compare": false, 00:11:01.243 "compare_and_write": false, 00:11:01.243 "abort": false, 00:11:01.243 "seek_hole": false, 00:11:01.243 "seek_data": false, 00:11:01.243 "copy": false, 00:11:01.243 "nvme_iov_md": false 00:11:01.243 }, 00:11:01.243 "memory_domains": [ 00:11:01.243 { 00:11:01.243 "dma_device_id": "system", 00:11:01.243 "dma_device_type": 1 00:11:01.243 }, 00:11:01.243 { 00:11:01.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.243 "dma_device_type": 2 00:11:01.243 }, 00:11:01.243 { 00:11:01.243 "dma_device_id": "system", 00:11:01.243 "dma_device_type": 1 00:11:01.243 }, 00:11:01.243 { 00:11:01.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.243 "dma_device_type": 2 00:11:01.243 }, 00:11:01.243 { 00:11:01.243 "dma_device_id": "system", 00:11:01.243 "dma_device_type": 1 00:11:01.243 }, 00:11:01.243 { 00:11:01.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:01.243 "dma_device_type": 2 00:11:01.243 } 00:11:01.243 ], 00:11:01.243 "driver_specific": { 00:11:01.243 "raid": { 00:11:01.243 "uuid": "7f2e24f7-8676-49c2-af92-934826d116e4", 00:11:01.243 "strip_size_kb": 64, 00:11:01.243 "state": "online", 00:11:01.243 "raid_level": "concat", 00:11:01.243 "superblock": true, 00:11:01.243 "num_base_bdevs": 3, 00:11:01.243 "num_base_bdevs_discovered": 3, 00:11:01.243 "num_base_bdevs_operational": 3, 00:11:01.243 "base_bdevs_list": [ 00:11:01.243 { 00:11:01.243 "name": "pt1", 00:11:01.243 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:01.243 "is_configured": true, 00:11:01.243 "data_offset": 2048, 00:11:01.243 "data_size": 63488 00:11:01.243 }, 00:11:01.243 { 00:11:01.243 "name": "pt2", 00:11:01.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:01.243 "is_configured": true, 00:11:01.244 "data_offset": 2048, 00:11:01.244 "data_size": 63488 00:11:01.244 }, 
00:11:01.244 { 00:11:01.244 "name": "pt3", 00:11:01.244 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:01.244 "is_configured": true, 00:11:01.244 "data_offset": 2048, 00:11:01.244 "data_size": 63488 00:11:01.244 } 00:11:01.244 ] 00:11:01.244 } 00:11:01.244 } 00:11:01.244 }' 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:01.244 pt2 00:11:01.244 pt3' 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, 
.dif_type] | join(" ")' 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:01.244 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:01.501 
[2024-11-27 14:10:38.539852] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 7f2e24f7-8676-49c2-af92-934826d116e4 '!=' 7f2e24f7-8676-49c2-af92-934826d116e4 ']' 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 66798 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 66798 ']' 00:11:01.501 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 66798 00:11:01.502 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:01.502 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:01.502 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 66798 00:11:01.502 killing process with pid 66798 00:11:01.502 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:01.502 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:01.502 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 66798' 00:11:01.502 14:10:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 66798 00:11:01.502 [2024-11-27 14:10:38.629083] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:01.502 14:10:38 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@978 -- # wait 66798 00:11:01.502 [2024-11-27 14:10:38.629197] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:01.502 [2024-11-27 14:10:38.629275] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:01.502 [2024-11-27 14:10:38.629296] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:01.759 [2024-11-27 14:10:38.905163] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:02.695 ************************************ 00:11:02.695 END TEST raid_superblock_test 00:11:02.696 ************************************ 00:11:02.696 14:10:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:02.696 00:11:02.696 real 0m5.625s 00:11:02.696 user 0m8.438s 00:11:02.696 sys 0m0.809s 00:11:02.696 14:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.696 14:10:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.954 14:10:40 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:02.954 14:10:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:02.954 14:10:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.954 14:10:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:02.954 ************************************ 00:11:02.954 START TEST raid_read_error_test 00:11:02.954 ************************************ 00:11:02.954 14:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 read 00:11:02.954 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:02.954 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:02.955 14:10:40 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.9y2R56ML7m 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67058 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67058 00:11:02.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 67058 ']' 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:02.955 14:10:40 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:02.955 [2024-11-27 14:10:40.149865] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:11:02.955 [2024-11-27 14:10:40.150045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67058 ] 00:11:03.213 [2024-11-27 14:10:40.339048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:03.471 [2024-11-27 14:10:40.495509] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:03.471 [2024-11-27 14:10:40.719399] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:03.471 [2024-11-27 14:10:40.719487] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.039 BaseBdev1_malloc 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.039 true 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.039 [2024-11-27 14:10:41.226299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:04.039 [2024-11-27 14:10:41.226390] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.039 [2024-11-27 14:10:41.226418] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:04.039 [2024-11-27 14:10:41.226435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.039 [2024-11-27 14:10:41.229388] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.039 [2024-11-27 14:10:41.229452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:04.039 BaseBdev1 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.039 BaseBdev2_malloc 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.039 true 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.039 [2024-11-27 14:10:41.290931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:04.039 [2024-11-27 14:10:41.291249] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.039 [2024-11-27 14:10:41.291299] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:04.039 [2024-11-27 14:10:41.291319] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.039 [2024-11-27 14:10:41.294667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.039 [2024-11-27 14:10:41.294849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:04.039 BaseBdev2 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.039 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.299 BaseBdev3_malloc 00:11:04.299 14:10:41 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.299 true 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.299 [2024-11-27 14:10:41.371186] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:04.299 [2024-11-27 14:10:41.371464] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:04.299 [2024-11-27 14:10:41.371513] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:04.299 [2024-11-27 14:10:41.371532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:04.299 [2024-11-27 14:10:41.374779] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:04.299 [2024-11-27 14:10:41.374856] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:04.299 BaseBdev3 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.299 [2024-11-27 14:10:41.379410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:04.299 [2024-11-27 14:10:41.382183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.299 [2024-11-27 14:10:41.382302] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:04.299 [2024-11-27 14:10:41.382631] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:04.299 [2024-11-27 14:10:41.382650] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:04.299 [2024-11-27 14:10:41.383004] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:04.299 [2024-11-27 14:10:41.383303] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:04.299 [2024-11-27 14:10:41.383353] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:04.299 [2024-11-27 14:10:41.383586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:04.299 14:10:41 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:04.299 "name": "raid_bdev1", 00:11:04.299 "uuid": "e116c9c4-7c77-4749-84c6-678ac8a54d11", 00:11:04.299 "strip_size_kb": 64, 00:11:04.299 "state": "online", 00:11:04.299 "raid_level": "concat", 00:11:04.299 "superblock": true, 00:11:04.299 "num_base_bdevs": 3, 00:11:04.299 "num_base_bdevs_discovered": 3, 00:11:04.299 "num_base_bdevs_operational": 3, 00:11:04.299 "base_bdevs_list": [ 00:11:04.299 { 00:11:04.299 "name": "BaseBdev1", 00:11:04.299 "uuid": "4082bd08-b7fa-5104-87bd-2aab21f35ba7", 00:11:04.299 "is_configured": true, 00:11:04.299 "data_offset": 2048, 00:11:04.299 "data_size": 63488 00:11:04.299 }, 00:11:04.299 { 00:11:04.299 "name": "BaseBdev2", 00:11:04.299 "uuid": "3a1b535e-a324-5eb7-a5f1-a56d77f23914", 00:11:04.299 "is_configured": true, 00:11:04.299 "data_offset": 2048, 00:11:04.299 "data_size": 63488 
00:11:04.299 }, 00:11:04.299 { 00:11:04.299 "name": "BaseBdev3", 00:11:04.299 "uuid": "31635507-088c-500b-b8d3-b5dc8a677f97", 00:11:04.299 "is_configured": true, 00:11:04.299 "data_offset": 2048, 00:11:04.299 "data_size": 63488 00:11:04.299 } 00:11:04.299 ] 00:11:04.299 }' 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:04.299 14:10:41 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:04.865 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:04.865 14:10:41 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:04.865 [2024-11-27 14:10:42.045279] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:05.801 "name": "raid_bdev1", 00:11:05.801 "uuid": "e116c9c4-7c77-4749-84c6-678ac8a54d11", 00:11:05.801 "strip_size_kb": 64, 00:11:05.801 "state": "online", 00:11:05.801 "raid_level": "concat", 00:11:05.801 "superblock": true, 00:11:05.801 "num_base_bdevs": 3, 00:11:05.801 "num_base_bdevs_discovered": 3, 00:11:05.801 "num_base_bdevs_operational": 3, 00:11:05.801 "base_bdevs_list": [ 00:11:05.801 { 00:11:05.801 "name": "BaseBdev1", 00:11:05.801 "uuid": "4082bd08-b7fa-5104-87bd-2aab21f35ba7", 00:11:05.801 "is_configured": true, 00:11:05.801 "data_offset": 2048, 00:11:05.801 "data_size": 63488 
00:11:05.801 }, 00:11:05.801 { 00:11:05.801 "name": "BaseBdev2", 00:11:05.801 "uuid": "3a1b535e-a324-5eb7-a5f1-a56d77f23914", 00:11:05.801 "is_configured": true, 00:11:05.801 "data_offset": 2048, 00:11:05.801 "data_size": 63488 00:11:05.801 }, 00:11:05.801 { 00:11:05.801 "name": "BaseBdev3", 00:11:05.801 "uuid": "31635507-088c-500b-b8d3-b5dc8a677f97", 00:11:05.801 "is_configured": true, 00:11:05.801 "data_offset": 2048, 00:11:05.801 "data_size": 63488 00:11:05.801 } 00:11:05.801 ] 00:11:05.801 }' 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:05.801 14:10:42 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.368 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:06.368 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:06.368 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:06.368 [2024-11-27 14:10:43.476541] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:06.368 [2024-11-27 14:10:43.476576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:06.368 [2024-11-27 14:10:43.480031] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:06.368 [2024-11-27 14:10:43.480088] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:06.368 [2024-11-27 14:10:43.480141] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:06.368 [2024-11-27 14:10:43.480154] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:06.368 { 00:11:06.368 "results": [ 00:11:06.368 { 00:11:06.368 "job": "raid_bdev1", 00:11:06.368 "core_mask": "0x1", 00:11:06.368 "workload": "randrw", 00:11:06.368 "percentage": 50, 
00:11:06.368 "status": "finished", 00:11:06.368 "queue_depth": 1, 00:11:06.368 "io_size": 131072, 00:11:06.368 "runtime": 1.428829, 00:11:06.368 "iops": 10682.873877839826, 00:11:06.368 "mibps": 1335.3592347299782, 00:11:06.368 "io_failed": 1, 00:11:06.368 "io_timeout": 0, 00:11:06.368 "avg_latency_us": 129.95630408242266, 00:11:06.368 "min_latency_us": 37.70181818181818, 00:11:06.368 "max_latency_us": 1951.1854545454546 00:11:06.368 } 00:11:06.368 ], 00:11:06.368 "core_count": 1 00:11:06.368 } 00:11:06.368 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:06.369 14:10:43 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67058 00:11:06.369 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 67058 ']' 00:11:06.369 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 67058 00:11:06.369 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:06.369 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:06.369 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67058 00:11:06.369 killing process with pid 67058 00:11:06.369 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:06.369 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:06.369 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67058' 00:11:06.369 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 67058 00:11:06.369 [2024-11-27 14:10:43.519362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:06.369 14:10:43 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 67058 00:11:06.628 [2024-11-27 
14:10:43.727567] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:07.565 14:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.9y2R56ML7m 00:11:07.565 14:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:07.565 14:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:07.565 14:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.70 00:11:07.565 14:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:07.565 14:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:07.565 14:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:07.565 ************************************ 00:11:07.565 END TEST raid_read_error_test 00:11:07.565 ************************************ 00:11:07.565 14:10:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.70 != \0\.\0\0 ]] 00:11:07.565 00:11:07.565 real 0m4.785s 00:11:07.565 user 0m6.017s 00:11:07.565 sys 0m0.590s 00:11:07.565 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.565 14:10:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.825 14:10:44 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:07.825 14:10:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:07.825 14:10:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.825 14:10:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:07.825 ************************************ 00:11:07.825 START TEST raid_write_error_test 00:11:07.825 ************************************ 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 3 write 00:11:07.825 14:10:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:07.825 14:10:44 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.vUpTqMqxuj 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=67204 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 67204 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 67204 ']' 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:07.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:07.825 14:10:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:07.825 [2024-11-27 14:10:44.964614] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:11:07.825 [2024-11-27 14:10:44.964805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67204 ] 00:11:08.085 [2024-11-27 14:10:45.135587] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.085 [2024-11-27 14:10:45.266150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.343 [2024-11-27 14:10:45.469321] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.343 [2024-11-27 14:10:45.469374] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:08.912 14:10:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.912 14:10:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:08.912 14:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.912 14:10:45 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:08.912 14:10:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.912 14:10:45 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.912 BaseBdev1_malloc 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.912 true 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.912 [2024-11-27 14:10:46.041837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:08.912 [2024-11-27 14:10:46.041900] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.912 [2024-11-27 14:10:46.041929] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:08.912 [2024-11-27 14:10:46.041945] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.912 [2024-11-27 14:10:46.044686] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.912 [2024-11-27 14:10:46.044747] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:08.912 BaseBdev1 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:08.912 BaseBdev2_malloc 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.912 true 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.912 [2024-11-27 14:10:46.101735] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:08.912 [2024-11-27 14:10:46.101829] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.912 [2024-11-27 14:10:46.101856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:08.912 [2024-11-27 14:10:46.101872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.912 [2024-11-27 14:10:46.104656] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.912 [2024-11-27 14:10:46.104717] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:08.912 BaseBdev2 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:08.912 14:10:46 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.912 BaseBdev3_malloc 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.912 true 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.912 [2024-11-27 14:10:46.169021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:08.912 [2024-11-27 14:10:46.169115] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:08.912 [2024-11-27 14:10:46.169142] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:08.912 [2024-11-27 14:10:46.169159] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:08.912 [2024-11-27 14:10:46.172082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:08.912 [2024-11-27 14:10:46.172131] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:08.912 BaseBdev3 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.912 [2024-11-27 14:10:46.177130] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:08.912 [2024-11-27 14:10:46.179701] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:08.912 [2024-11-27 14:10:46.179990] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:08.912 [2024-11-27 14:10:46.180266] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:08.912 [2024-11-27 14:10:46.180285] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:08.912 [2024-11-27 14:10:46.180620] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:08.912 [2024-11-27 14:10:46.180855] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:08.912 [2024-11-27 14:10:46.180882] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:08.912 [2024-11-27 14:10:46.181129] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.912 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.172 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:09.172 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.172 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:09.172 "name": "raid_bdev1", 00:11:09.172 "uuid": "3268a441-7ae5-4622-94bb-d68fe926ed50", 00:11:09.172 "strip_size_kb": 64, 00:11:09.172 "state": "online", 00:11:09.172 "raid_level": "concat", 00:11:09.172 "superblock": true, 00:11:09.172 "num_base_bdevs": 3, 00:11:09.172 "num_base_bdevs_discovered": 3, 00:11:09.172 "num_base_bdevs_operational": 3, 00:11:09.172 "base_bdevs_list": [ 00:11:09.172 { 00:11:09.172 
"name": "BaseBdev1", 00:11:09.172 "uuid": "1acb478e-30c0-5425-b26e-172c6a8bd433", 00:11:09.172 "is_configured": true, 00:11:09.172 "data_offset": 2048, 00:11:09.172 "data_size": 63488 00:11:09.172 }, 00:11:09.172 { 00:11:09.172 "name": "BaseBdev2", 00:11:09.172 "uuid": "02aeb624-b7d3-5f27-be5a-292060e2a587", 00:11:09.172 "is_configured": true, 00:11:09.172 "data_offset": 2048, 00:11:09.172 "data_size": 63488 00:11:09.172 }, 00:11:09.172 { 00:11:09.172 "name": "BaseBdev3", 00:11:09.172 "uuid": "ff24b4b9-8f76-5f13-8343-91c7fb5727c1", 00:11:09.172 "is_configured": true, 00:11:09.172 "data_offset": 2048, 00:11:09.172 "data_size": 63488 00:11:09.172 } 00:11:09.172 ] 00:11:09.172 }' 00:11:09.172 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:09.172 14:10:46 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:09.738 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:09.738 14:10:46 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:09.738 [2024-11-27 14:10:46.846887] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test 
-- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:10.693 "name": "raid_bdev1", 00:11:10.693 "uuid": "3268a441-7ae5-4622-94bb-d68fe926ed50", 00:11:10.693 "strip_size_kb": 64, 00:11:10.693 "state": "online", 
00:11:10.693 "raid_level": "concat", 00:11:10.693 "superblock": true, 00:11:10.693 "num_base_bdevs": 3, 00:11:10.693 "num_base_bdevs_discovered": 3, 00:11:10.693 "num_base_bdevs_operational": 3, 00:11:10.693 "base_bdevs_list": [ 00:11:10.693 { 00:11:10.693 "name": "BaseBdev1", 00:11:10.693 "uuid": "1acb478e-30c0-5425-b26e-172c6a8bd433", 00:11:10.693 "is_configured": true, 00:11:10.693 "data_offset": 2048, 00:11:10.693 "data_size": 63488 00:11:10.693 }, 00:11:10.693 { 00:11:10.693 "name": "BaseBdev2", 00:11:10.693 "uuid": "02aeb624-b7d3-5f27-be5a-292060e2a587", 00:11:10.693 "is_configured": true, 00:11:10.693 "data_offset": 2048, 00:11:10.693 "data_size": 63488 00:11:10.693 }, 00:11:10.693 { 00:11:10.693 "name": "BaseBdev3", 00:11:10.693 "uuid": "ff24b4b9-8f76-5f13-8343-91c7fb5727c1", 00:11:10.693 "is_configured": true, 00:11:10.693 "data_offset": 2048, 00:11:10.693 "data_size": 63488 00:11:10.693 } 00:11:10.693 ] 00:11:10.693 }' 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:10.693 14:10:47 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:11.262 [2024-11-27 14:10:48.261737] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:11.262 [2024-11-27 14:10:48.261958] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:11.262 [2024-11-27 14:10:48.265535] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:11.262 { 00:11:11.262 "results": [ 00:11:11.262 { 00:11:11.262 "job": "raid_bdev1", 00:11:11.262 "core_mask": "0x1", 00:11:11.262 "workload": "randrw", 00:11:11.262 
"percentage": 50, 00:11:11.262 "status": "finished", 00:11:11.262 "queue_depth": 1, 00:11:11.262 "io_size": 131072, 00:11:11.262 "runtime": 1.412536, 00:11:11.262 "iops": 10914.412092859935, 00:11:11.262 "mibps": 1364.3015116074919, 00:11:11.262 "io_failed": 1, 00:11:11.262 "io_timeout": 0, 00:11:11.262 "avg_latency_us": 127.3693510536681, 00:11:11.262 "min_latency_us": 37.236363636363635, 00:11:11.262 "max_latency_us": 1802.24 00:11:11.262 } 00:11:11.262 ], 00:11:11.262 "core_count": 1 00:11:11.262 } 00:11:11.262 [2024-11-27 14:10:48.265767] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.262 [2024-11-27 14:10:48.265853] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:11.262 [2024-11-27 14:10:48.265875] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 67204 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 67204 ']' 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 67204 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67204 00:11:11.262 killing process with pid 67204 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.262 14:10:48 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67204' 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 67204 00:11:11.262 [2024-11-27 14:10:48.302444] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:11.262 14:10:48 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 67204 00:11:11.262 [2024-11-27 14:10:48.498177] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:12.638 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.vUpTqMqxuj 00:11:12.638 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:12.638 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:12.638 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:11:12.638 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:11:12.638 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:12.638 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:11:12.638 14:10:49 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:11:12.638 00:11:12.638 real 0m4.695s 00:11:12.638 user 0m5.916s 00:11:12.638 sys 0m0.554s 00:11:12.638 ************************************ 00:11:12.638 END TEST raid_write_error_test 00:11:12.638 ************************************ 00:11:12.638 14:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:12.638 14:10:49 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.638 14:10:49 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:12.638 14:10:49 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test 
raid_state_function_test raid1 3 false 00:11:12.638 14:10:49 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:12.638 14:10:49 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:12.638 14:10:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:12.638 ************************************ 00:11:12.638 START TEST raid_state_function_test 00:11:12.638 ************************************ 00:11:12.638 14:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 false 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test 
-- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:12.639 Process raid pid: 67342 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=67342 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67342' 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 67342 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 67342 ']' 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:12.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:12.639 14:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.639 [2024-11-27 14:10:49.732605] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:11:12.639 [2024-11-27 14:10:49.732872] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:12.899 [2024-11-27 14:10:49.918212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:12.899 [2024-11-27 14:10:50.040729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:13.158 [2024-11-27 14:10:50.244210] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.158 [2024-11-27 14:10:50.244540] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.727 [2024-11-27 14:10:50.756016] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:13.727 [2024-11-27 14:10:50.756079] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:13.727 [2024-11-27 14:10:50.756096] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:13.727 [2024-11-27 14:10:50.756129] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:13.727 [2024-11-27 14:10:50.756153] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:13.727 [2024-11-27 14:10:50.756167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:13.727 
14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:13.727 "name": "Existed_Raid", 00:11:13.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.727 "strip_size_kb": 0, 00:11:13.727 "state": "configuring", 00:11:13.727 "raid_level": "raid1", 00:11:13.727 "superblock": false, 00:11:13.727 "num_base_bdevs": 3, 00:11:13.727 "num_base_bdevs_discovered": 0, 00:11:13.727 "num_base_bdevs_operational": 3, 00:11:13.727 "base_bdevs_list": [ 00:11:13.727 { 00:11:13.727 "name": "BaseBdev1", 00:11:13.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.727 "is_configured": false, 00:11:13.727 "data_offset": 0, 00:11:13.727 "data_size": 0 00:11:13.727 }, 00:11:13.727 { 00:11:13.727 "name": "BaseBdev2", 00:11:13.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.727 "is_configured": false, 00:11:13.727 "data_offset": 0, 00:11:13.727 "data_size": 0 00:11:13.727 }, 00:11:13.727 { 00:11:13.727 "name": "BaseBdev3", 00:11:13.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:13.727 "is_configured": false, 00:11:13.727 "data_offset": 0, 00:11:13.727 "data_size": 0 00:11:13.727 } 00:11:13.727 ] 00:11:13.727 }' 00:11:13.727 14:10:50 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:13.727 14:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 [2024-11-27 14:10:51.288119] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.296 [2024-11-27 14:10:51.288239] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 [2024-11-27 14:10:51.296087] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:14.296 [2024-11-27 14:10:51.296140] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:14.296 [2024-11-27 14:10:51.296156] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.296 [2024-11-27 14:10:51.296172] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.296 [2024-11-27 14:10:51.296181] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.296 [2024-11-27 14:10:51.296195] 
bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 [2024-11-27 14:10:51.343096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.296 BaseBdev1 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 [ 00:11:14.296 { 00:11:14.296 "name": "BaseBdev1", 00:11:14.296 "aliases": [ 00:11:14.296 "3cbe0707-7af1-4177-996b-06bc7d477dc7" 00:11:14.296 ], 00:11:14.296 "product_name": "Malloc disk", 00:11:14.296 "block_size": 512, 00:11:14.296 "num_blocks": 65536, 00:11:14.296 "uuid": "3cbe0707-7af1-4177-996b-06bc7d477dc7", 00:11:14.296 "assigned_rate_limits": { 00:11:14.296 "rw_ios_per_sec": 0, 00:11:14.296 "rw_mbytes_per_sec": 0, 00:11:14.296 "r_mbytes_per_sec": 0, 00:11:14.296 "w_mbytes_per_sec": 0 00:11:14.296 }, 00:11:14.296 "claimed": true, 00:11:14.296 "claim_type": "exclusive_write", 00:11:14.296 "zoned": false, 00:11:14.296 "supported_io_types": { 00:11:14.296 "read": true, 00:11:14.296 "write": true, 00:11:14.296 "unmap": true, 00:11:14.296 "flush": true, 00:11:14.296 "reset": true, 00:11:14.296 "nvme_admin": false, 00:11:14.296 "nvme_io": false, 00:11:14.296 "nvme_io_md": false, 00:11:14.296 "write_zeroes": true, 00:11:14.296 "zcopy": true, 00:11:14.296 "get_zone_info": false, 00:11:14.296 "zone_management": false, 00:11:14.296 "zone_append": false, 00:11:14.296 "compare": false, 00:11:14.296 "compare_and_write": false, 00:11:14.296 "abort": true, 00:11:14.296 "seek_hole": false, 00:11:14.296 "seek_data": false, 00:11:14.296 "copy": true, 00:11:14.296 "nvme_iov_md": false 00:11:14.296 }, 00:11:14.296 "memory_domains": [ 00:11:14.296 { 00:11:14.296 "dma_device_id": "system", 00:11:14.296 "dma_device_type": 1 00:11:14.296 }, 00:11:14.296 { 00:11:14.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:14.296 "dma_device_type": 2 00:11:14.296 } 00:11:14.296 ], 00:11:14.296 "driver_specific": {} 00:11:14.296 } 00:11:14.296 ] 00:11:14.296 14:10:51 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.296 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:14.296 "name": "Existed_Raid", 00:11:14.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.296 "strip_size_kb": 0, 00:11:14.296 "state": "configuring", 00:11:14.296 "raid_level": "raid1", 00:11:14.296 "superblock": false, 00:11:14.296 "num_base_bdevs": 3, 00:11:14.297 "num_base_bdevs_discovered": 1, 00:11:14.297 "num_base_bdevs_operational": 3, 00:11:14.297 "base_bdevs_list": [ 00:11:14.297 { 00:11:14.297 "name": "BaseBdev1", 00:11:14.297 "uuid": "3cbe0707-7af1-4177-996b-06bc7d477dc7", 00:11:14.297 "is_configured": true, 00:11:14.297 "data_offset": 0, 00:11:14.297 "data_size": 65536 00:11:14.297 }, 00:11:14.297 { 00:11:14.297 "name": "BaseBdev2", 00:11:14.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.297 "is_configured": false, 00:11:14.297 "data_offset": 0, 00:11:14.297 "data_size": 0 00:11:14.297 }, 00:11:14.297 { 00:11:14.297 "name": "BaseBdev3", 00:11:14.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.297 "is_configured": false, 00:11:14.297 "data_offset": 0, 00:11:14.297 "data_size": 0 00:11:14.297 } 00:11:14.297 ] 00:11:14.297 }' 00:11:14.297 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.297 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.865 [2024-11-27 14:10:51.911395] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:14.865 [2024-11-27 14:10:51.911627] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.865 [2024-11-27 14:10:51.923422] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:14.865 [2024-11-27 14:10:51.925885] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:14.865 [2024-11-27 14:10:51.925936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:14.865 [2024-11-27 14:10:51.925953] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:14.865 [2024-11-27 14:10:51.925968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:14.865 "name": "Existed_Raid", 00:11:14.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.865 "strip_size_kb": 0, 00:11:14.865 "state": "configuring", 00:11:14.865 "raid_level": "raid1", 00:11:14.865 "superblock": false, 00:11:14.865 "num_base_bdevs": 3, 00:11:14.865 "num_base_bdevs_discovered": 1, 00:11:14.865 "num_base_bdevs_operational": 3, 00:11:14.865 "base_bdevs_list": [ 00:11:14.865 { 00:11:14.865 "name": "BaseBdev1", 00:11:14.865 "uuid": "3cbe0707-7af1-4177-996b-06bc7d477dc7", 00:11:14.865 "is_configured": true, 00:11:14.865 "data_offset": 0, 00:11:14.865 "data_size": 65536 00:11:14.865 }, 00:11:14.865 { 00:11:14.865 "name": "BaseBdev2", 00:11:14.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.865 
"is_configured": false, 00:11:14.865 "data_offset": 0, 00:11:14.865 "data_size": 0 00:11:14.865 }, 00:11:14.865 { 00:11:14.865 "name": "BaseBdev3", 00:11:14.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:14.865 "is_configured": false, 00:11:14.865 "data_offset": 0, 00:11:14.865 "data_size": 0 00:11:14.865 } 00:11:14.865 ] 00:11:14.865 }' 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:14.865 14:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.434 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.435 [2024-11-27 14:10:52.504138] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:15.435 BaseBdev2 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:15.435 14:10:52 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.435 [ 00:11:15.435 { 00:11:15.435 "name": "BaseBdev2", 00:11:15.435 "aliases": [ 00:11:15.435 "ab696487-2596-42a9-9ef2-bf4855a89c8a" 00:11:15.435 ], 00:11:15.435 "product_name": "Malloc disk", 00:11:15.435 "block_size": 512, 00:11:15.435 "num_blocks": 65536, 00:11:15.435 "uuid": "ab696487-2596-42a9-9ef2-bf4855a89c8a", 00:11:15.435 "assigned_rate_limits": { 00:11:15.435 "rw_ios_per_sec": 0, 00:11:15.435 "rw_mbytes_per_sec": 0, 00:11:15.435 "r_mbytes_per_sec": 0, 00:11:15.435 "w_mbytes_per_sec": 0 00:11:15.435 }, 00:11:15.435 "claimed": true, 00:11:15.435 "claim_type": "exclusive_write", 00:11:15.435 "zoned": false, 00:11:15.435 "supported_io_types": { 00:11:15.435 "read": true, 00:11:15.435 "write": true, 00:11:15.435 "unmap": true, 00:11:15.435 "flush": true, 00:11:15.435 "reset": true, 00:11:15.435 "nvme_admin": false, 00:11:15.435 "nvme_io": false, 00:11:15.435 "nvme_io_md": false, 00:11:15.435 "write_zeroes": true, 00:11:15.435 "zcopy": true, 00:11:15.435 "get_zone_info": false, 00:11:15.435 "zone_management": false, 00:11:15.435 "zone_append": false, 00:11:15.435 "compare": false, 00:11:15.435 "compare_and_write": false, 00:11:15.435 "abort": true, 00:11:15.435 "seek_hole": false, 00:11:15.435 "seek_data": false, 00:11:15.435 "copy": true, 00:11:15.435 "nvme_iov_md": false 00:11:15.435 }, 00:11:15.435 
"memory_domains": [ 00:11:15.435 { 00:11:15.435 "dma_device_id": "system", 00:11:15.435 "dma_device_type": 1 00:11:15.435 }, 00:11:15.435 { 00:11:15.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:15.435 "dma_device_type": 2 00:11:15.435 } 00:11:15.435 ], 00:11:15.435 "driver_specific": {} 00:11:15.435 } 00:11:15.435 ] 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:15.435 "name": "Existed_Raid", 00:11:15.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.435 "strip_size_kb": 0, 00:11:15.435 "state": "configuring", 00:11:15.435 "raid_level": "raid1", 00:11:15.435 "superblock": false, 00:11:15.435 "num_base_bdevs": 3, 00:11:15.435 "num_base_bdevs_discovered": 2, 00:11:15.435 "num_base_bdevs_operational": 3, 00:11:15.435 "base_bdevs_list": [ 00:11:15.435 { 00:11:15.435 "name": "BaseBdev1", 00:11:15.435 "uuid": "3cbe0707-7af1-4177-996b-06bc7d477dc7", 00:11:15.435 "is_configured": true, 00:11:15.435 "data_offset": 0, 00:11:15.435 "data_size": 65536 00:11:15.435 }, 00:11:15.435 { 00:11:15.435 "name": "BaseBdev2", 00:11:15.435 "uuid": "ab696487-2596-42a9-9ef2-bf4855a89c8a", 00:11:15.435 "is_configured": true, 00:11:15.435 "data_offset": 0, 00:11:15.435 "data_size": 65536 00:11:15.435 }, 00:11:15.435 { 00:11:15.435 "name": "BaseBdev3", 00:11:15.435 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:15.435 "is_configured": false, 00:11:15.435 "data_offset": 0, 00:11:15.435 "data_size": 0 00:11:15.435 } 00:11:15.435 ] 00:11:15.435 }' 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:15.435 14:10:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.080 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 
512 -b BaseBdev3 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.081 [2024-11-27 14:10:53.140493] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:16.081 [2024-11-27 14:10:53.140552] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:16.081 [2024-11-27 14:10:53.140571] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:16.081 [2024-11-27 14:10:53.140970] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:16.081 [2024-11-27 14:10:53.141215] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:16.081 [2024-11-27 14:10:53.141231] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:16.081 [2024-11-27 14:10:53.141555] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:16.081 BaseBdev3 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.081 [ 00:11:16.081 { 00:11:16.081 "name": "BaseBdev3", 00:11:16.081 "aliases": [ 00:11:16.081 "d3648ffe-c1aa-4e60-abbb-1c96b9a50ed4" 00:11:16.081 ], 00:11:16.081 "product_name": "Malloc disk", 00:11:16.081 "block_size": 512, 00:11:16.081 "num_blocks": 65536, 00:11:16.081 "uuid": "d3648ffe-c1aa-4e60-abbb-1c96b9a50ed4", 00:11:16.081 "assigned_rate_limits": { 00:11:16.081 "rw_ios_per_sec": 0, 00:11:16.081 "rw_mbytes_per_sec": 0, 00:11:16.081 "r_mbytes_per_sec": 0, 00:11:16.081 "w_mbytes_per_sec": 0 00:11:16.081 }, 00:11:16.081 "claimed": true, 00:11:16.081 "claim_type": "exclusive_write", 00:11:16.081 "zoned": false, 00:11:16.081 "supported_io_types": { 00:11:16.081 "read": true, 00:11:16.081 "write": true, 00:11:16.081 "unmap": true, 00:11:16.081 "flush": true, 00:11:16.081 "reset": true, 00:11:16.081 "nvme_admin": false, 00:11:16.081 "nvme_io": false, 00:11:16.081 "nvme_io_md": false, 00:11:16.081 "write_zeroes": true, 00:11:16.081 "zcopy": true, 00:11:16.081 "get_zone_info": false, 00:11:16.081 "zone_management": false, 00:11:16.081 "zone_append": false, 00:11:16.081 "compare": false, 00:11:16.081 "compare_and_write": false, 00:11:16.081 "abort": true, 00:11:16.081 "seek_hole": false, 00:11:16.081 "seek_data": false, 00:11:16.081 
"copy": true, 00:11:16.081 "nvme_iov_md": false 00:11:16.081 }, 00:11:16.081 "memory_domains": [ 00:11:16.081 { 00:11:16.081 "dma_device_id": "system", 00:11:16.081 "dma_device_type": 1 00:11:16.081 }, 00:11:16.081 { 00:11:16.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.081 "dma_device_type": 2 00:11:16.081 } 00:11:16.081 ], 00:11:16.081 "driver_specific": {} 00:11:16.081 } 00:11:16.081 ] 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.081 14:10:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.081 "name": "Existed_Raid", 00:11:16.081 "uuid": "e6a1031c-78d1-433b-944f-df8bf24c86e9", 00:11:16.081 "strip_size_kb": 0, 00:11:16.081 "state": "online", 00:11:16.081 "raid_level": "raid1", 00:11:16.081 "superblock": false, 00:11:16.081 "num_base_bdevs": 3, 00:11:16.081 "num_base_bdevs_discovered": 3, 00:11:16.081 "num_base_bdevs_operational": 3, 00:11:16.081 "base_bdevs_list": [ 00:11:16.081 { 00:11:16.081 "name": "BaseBdev1", 00:11:16.081 "uuid": "3cbe0707-7af1-4177-996b-06bc7d477dc7", 00:11:16.081 "is_configured": true, 00:11:16.081 "data_offset": 0, 00:11:16.081 "data_size": 65536 00:11:16.081 }, 00:11:16.081 { 00:11:16.081 "name": "BaseBdev2", 00:11:16.081 "uuid": "ab696487-2596-42a9-9ef2-bf4855a89c8a", 00:11:16.081 "is_configured": true, 00:11:16.081 "data_offset": 0, 00:11:16.081 "data_size": 65536 00:11:16.081 }, 00:11:16.081 { 00:11:16.081 "name": "BaseBdev3", 00:11:16.081 "uuid": "d3648ffe-c1aa-4e60-abbb-1c96b9a50ed4", 00:11:16.081 "is_configured": true, 00:11:16.081 "data_offset": 0, 00:11:16.081 "data_size": 65536 00:11:16.081 } 00:11:16.081 ] 00:11:16.081 }' 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.081 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.649 14:10:53 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:16.649 [2024-11-27 14:10:53.713197] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:16.649 "name": "Existed_Raid", 00:11:16.649 "aliases": [ 00:11:16.649 "e6a1031c-78d1-433b-944f-df8bf24c86e9" 00:11:16.649 ], 00:11:16.649 "product_name": "Raid Volume", 00:11:16.649 "block_size": 512, 00:11:16.649 "num_blocks": 65536, 00:11:16.649 "uuid": "e6a1031c-78d1-433b-944f-df8bf24c86e9", 00:11:16.649 "assigned_rate_limits": { 00:11:16.649 "rw_ios_per_sec": 0, 00:11:16.649 "rw_mbytes_per_sec": 0, 00:11:16.649 "r_mbytes_per_sec": 0, 00:11:16.649 "w_mbytes_per_sec": 0 00:11:16.649 }, 00:11:16.649 "claimed": false, 00:11:16.649 "zoned": false, 
00:11:16.649 "supported_io_types": { 00:11:16.649 "read": true, 00:11:16.649 "write": true, 00:11:16.649 "unmap": false, 00:11:16.649 "flush": false, 00:11:16.649 "reset": true, 00:11:16.649 "nvme_admin": false, 00:11:16.649 "nvme_io": false, 00:11:16.649 "nvme_io_md": false, 00:11:16.649 "write_zeroes": true, 00:11:16.649 "zcopy": false, 00:11:16.649 "get_zone_info": false, 00:11:16.649 "zone_management": false, 00:11:16.649 "zone_append": false, 00:11:16.649 "compare": false, 00:11:16.649 "compare_and_write": false, 00:11:16.649 "abort": false, 00:11:16.649 "seek_hole": false, 00:11:16.649 "seek_data": false, 00:11:16.649 "copy": false, 00:11:16.649 "nvme_iov_md": false 00:11:16.649 }, 00:11:16.649 "memory_domains": [ 00:11:16.649 { 00:11:16.649 "dma_device_id": "system", 00:11:16.649 "dma_device_type": 1 00:11:16.649 }, 00:11:16.649 { 00:11:16.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.649 "dma_device_type": 2 00:11:16.649 }, 00:11:16.649 { 00:11:16.649 "dma_device_id": "system", 00:11:16.649 "dma_device_type": 1 00:11:16.649 }, 00:11:16.649 { 00:11:16.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.649 "dma_device_type": 2 00:11:16.649 }, 00:11:16.649 { 00:11:16.649 "dma_device_id": "system", 00:11:16.649 "dma_device_type": 1 00:11:16.649 }, 00:11:16.649 { 00:11:16.649 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:16.649 "dma_device_type": 2 00:11:16.649 } 00:11:16.649 ], 00:11:16.649 "driver_specific": { 00:11:16.649 "raid": { 00:11:16.649 "uuid": "e6a1031c-78d1-433b-944f-df8bf24c86e9", 00:11:16.649 "strip_size_kb": 0, 00:11:16.649 "state": "online", 00:11:16.649 "raid_level": "raid1", 00:11:16.649 "superblock": false, 00:11:16.649 "num_base_bdevs": 3, 00:11:16.649 "num_base_bdevs_discovered": 3, 00:11:16.649 "num_base_bdevs_operational": 3, 00:11:16.649 "base_bdevs_list": [ 00:11:16.649 { 00:11:16.649 "name": "BaseBdev1", 00:11:16.649 "uuid": "3cbe0707-7af1-4177-996b-06bc7d477dc7", 00:11:16.649 "is_configured": true, 00:11:16.649 
"data_offset": 0, 00:11:16.649 "data_size": 65536 00:11:16.649 }, 00:11:16.649 { 00:11:16.649 "name": "BaseBdev2", 00:11:16.649 "uuid": "ab696487-2596-42a9-9ef2-bf4855a89c8a", 00:11:16.649 "is_configured": true, 00:11:16.649 "data_offset": 0, 00:11:16.649 "data_size": 65536 00:11:16.649 }, 00:11:16.649 { 00:11:16.649 "name": "BaseBdev3", 00:11:16.649 "uuid": "d3648ffe-c1aa-4e60-abbb-1c96b9a50ed4", 00:11:16.649 "is_configured": true, 00:11:16.649 "data_offset": 0, 00:11:16.649 "data_size": 65536 00:11:16.649 } 00:11:16.649 ] 00:11:16.649 } 00:11:16.649 } 00:11:16.649 }' 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:16.649 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:16.649 BaseBdev2 00:11:16.649 BaseBdev3' 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # 
cmp_base_bdev='512 ' 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.650 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.909 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.909 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.909 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:16.909 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:16.909 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:16.909 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.909 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.909 14:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:16.909 14:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 
== \5\1\2\ \ \ ]] 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.909 [2024-11-27 14:10:54.032869] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:16.909 "name": "Existed_Raid", 00:11:16.909 "uuid": "e6a1031c-78d1-433b-944f-df8bf24c86e9", 00:11:16.909 "strip_size_kb": 0, 00:11:16.909 "state": "online", 00:11:16.909 "raid_level": "raid1", 00:11:16.909 "superblock": false, 00:11:16.909 "num_base_bdevs": 3, 00:11:16.909 "num_base_bdevs_discovered": 2, 00:11:16.909 "num_base_bdevs_operational": 2, 00:11:16.909 "base_bdevs_list": [ 00:11:16.909 { 00:11:16.909 "name": null, 00:11:16.909 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:16.909 "is_configured": false, 00:11:16.909 "data_offset": 0, 00:11:16.909 "data_size": 65536 00:11:16.909 }, 00:11:16.909 { 00:11:16.909 "name": "BaseBdev2", 00:11:16.909 "uuid": "ab696487-2596-42a9-9ef2-bf4855a89c8a", 00:11:16.909 "is_configured": true, 00:11:16.909 "data_offset": 0, 00:11:16.909 "data_size": 65536 00:11:16.909 }, 00:11:16.909 { 00:11:16.909 "name": "BaseBdev3", 00:11:16.909 "uuid": "d3648ffe-c1aa-4e60-abbb-1c96b9a50ed4", 00:11:16.909 "is_configured": true, 00:11:16.909 "data_offset": 0, 00:11:16.909 "data_size": 65536 00:11:16.909 } 00:11:16.909 ] 
00:11:16.909 }' 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:16.909 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.477 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.477 [2024-11-27 14:10:54.682069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:17.736 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.737 14:10:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.737 [2024-11-27 14:10:54.838965] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:17.737 [2024-11-27 14:10:54.839280] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:17.737 [2024-11-27 14:10:54.928953] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:17.737 [2024-11-27 14:10:54.929026] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:17.737 [2024-11-27 14:10:54.929045] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:17.737 14:10:54 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.737 14:10:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.997 BaseBdev2 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.997 
14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.997 [ 00:11:17.997 { 00:11:17.997 "name": "BaseBdev2", 00:11:17.997 "aliases": [ 00:11:17.997 "f9f38881-2277-4eee-b059-8e3fd8bd07b9" 00:11:17.997 ], 00:11:17.997 "product_name": "Malloc disk", 00:11:17.997 "block_size": 512, 00:11:17.997 "num_blocks": 65536, 00:11:17.997 "uuid": "f9f38881-2277-4eee-b059-8e3fd8bd07b9", 00:11:17.997 "assigned_rate_limits": { 00:11:17.997 "rw_ios_per_sec": 0, 00:11:17.997 "rw_mbytes_per_sec": 0, 00:11:17.997 "r_mbytes_per_sec": 0, 00:11:17.997 "w_mbytes_per_sec": 0 00:11:17.997 }, 00:11:17.997 "claimed": false, 00:11:17.997 "zoned": false, 00:11:17.997 "supported_io_types": { 00:11:17.997 "read": true, 00:11:17.997 "write": true, 00:11:17.997 "unmap": true, 00:11:17.997 "flush": true, 00:11:17.997 "reset": true, 00:11:17.997 "nvme_admin": false, 00:11:17.997 "nvme_io": false, 00:11:17.997 "nvme_io_md": false, 00:11:17.997 "write_zeroes": true, 
00:11:17.997 "zcopy": true, 00:11:17.997 "get_zone_info": false, 00:11:17.997 "zone_management": false, 00:11:17.997 "zone_append": false, 00:11:17.997 "compare": false, 00:11:17.997 "compare_and_write": false, 00:11:17.997 "abort": true, 00:11:17.997 "seek_hole": false, 00:11:17.997 "seek_data": false, 00:11:17.997 "copy": true, 00:11:17.997 "nvme_iov_md": false 00:11:17.997 }, 00:11:17.997 "memory_domains": [ 00:11:17.997 { 00:11:17.997 "dma_device_id": "system", 00:11:17.997 "dma_device_type": 1 00:11:17.997 }, 00:11:17.997 { 00:11:17.997 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.997 "dma_device_type": 2 00:11:17.997 } 00:11:17.997 ], 00:11:17.997 "driver_specific": {} 00:11:17.997 } 00:11:17.997 ] 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.997 BaseBdev3 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:17.997 14:10:55 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.997 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.997 [ 00:11:17.997 { 00:11:17.997 "name": "BaseBdev3", 00:11:17.997 "aliases": [ 00:11:17.997 "27ed476b-cd63-4e91-9dfc-338dccc079da" 00:11:17.997 ], 00:11:17.997 "product_name": "Malloc disk", 00:11:17.997 "block_size": 512, 00:11:17.997 "num_blocks": 65536, 00:11:17.997 "uuid": "27ed476b-cd63-4e91-9dfc-338dccc079da", 00:11:17.997 "assigned_rate_limits": { 00:11:17.997 "rw_ios_per_sec": 0, 00:11:17.997 "rw_mbytes_per_sec": 0, 00:11:17.997 "r_mbytes_per_sec": 0, 00:11:17.997 "w_mbytes_per_sec": 0 00:11:17.997 }, 00:11:17.997 "claimed": false, 00:11:17.997 "zoned": false, 00:11:17.997 "supported_io_types": { 00:11:17.997 "read": true, 00:11:17.997 "write": true, 00:11:17.997 "unmap": true, 00:11:17.997 "flush": true, 00:11:17.997 "reset": true, 00:11:17.997 "nvme_admin": false, 00:11:17.997 "nvme_io": false, 00:11:17.997 "nvme_io_md": false, 00:11:17.997 "write_zeroes": true, 
00:11:17.997 "zcopy": true, 00:11:17.997 "get_zone_info": false, 00:11:17.997 "zone_management": false, 00:11:17.997 "zone_append": false, 00:11:17.997 "compare": false, 00:11:17.997 "compare_and_write": false, 00:11:17.998 "abort": true, 00:11:17.998 "seek_hole": false, 00:11:17.998 "seek_data": false, 00:11:17.998 "copy": true, 00:11:17.998 "nvme_iov_md": false 00:11:17.998 }, 00:11:17.998 "memory_domains": [ 00:11:17.998 { 00:11:17.998 "dma_device_id": "system", 00:11:17.998 "dma_device_type": 1 00:11:17.998 }, 00:11:17.998 { 00:11:17.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:17.998 "dma_device_type": 2 00:11:17.998 } 00:11:17.998 ], 00:11:17.998 "driver_specific": {} 00:11:17.998 } 00:11:17.998 ] 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.998 [2024-11-27 14:10:55.144514] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:17.998 [2024-11-27 14:10:55.144753] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:17.998 [2024-11-27 14:10:55.144916] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:17.998 [2024-11-27 14:10:55.147407] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 
-- # raid_bdev_info='{ 00:11:17.998 "name": "Existed_Raid", 00:11:17.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.998 "strip_size_kb": 0, 00:11:17.998 "state": "configuring", 00:11:17.998 "raid_level": "raid1", 00:11:17.998 "superblock": false, 00:11:17.998 "num_base_bdevs": 3, 00:11:17.998 "num_base_bdevs_discovered": 2, 00:11:17.998 "num_base_bdevs_operational": 3, 00:11:17.998 "base_bdevs_list": [ 00:11:17.998 { 00:11:17.998 "name": "BaseBdev1", 00:11:17.998 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:17.998 "is_configured": false, 00:11:17.998 "data_offset": 0, 00:11:17.998 "data_size": 0 00:11:17.998 }, 00:11:17.998 { 00:11:17.998 "name": "BaseBdev2", 00:11:17.998 "uuid": "f9f38881-2277-4eee-b059-8e3fd8bd07b9", 00:11:17.998 "is_configured": true, 00:11:17.998 "data_offset": 0, 00:11:17.998 "data_size": 65536 00:11:17.998 }, 00:11:17.998 { 00:11:17.998 "name": "BaseBdev3", 00:11:17.998 "uuid": "27ed476b-cd63-4e91-9dfc-338dccc079da", 00:11:17.998 "is_configured": true, 00:11:17.998 "data_offset": 0, 00:11:17.998 "data_size": 65536 00:11:17.998 } 00:11:17.998 ] 00:11:17.998 }' 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:17.998 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.566 [2024-11-27 14:10:55.681145] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid1 0 3 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:18.566 "name": "Existed_Raid", 00:11:18.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.566 "strip_size_kb": 0, 00:11:18.566 "state": "configuring", 00:11:18.566 "raid_level": "raid1", 00:11:18.566 "superblock": false, 00:11:18.566 "num_base_bdevs": 3, 
00:11:18.566 "num_base_bdevs_discovered": 1, 00:11:18.566 "num_base_bdevs_operational": 3, 00:11:18.566 "base_bdevs_list": [ 00:11:18.566 { 00:11:18.566 "name": "BaseBdev1", 00:11:18.566 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:18.566 "is_configured": false, 00:11:18.566 "data_offset": 0, 00:11:18.566 "data_size": 0 00:11:18.566 }, 00:11:18.566 { 00:11:18.566 "name": null, 00:11:18.566 "uuid": "f9f38881-2277-4eee-b059-8e3fd8bd07b9", 00:11:18.566 "is_configured": false, 00:11:18.566 "data_offset": 0, 00:11:18.566 "data_size": 65536 00:11:18.566 }, 00:11:18.566 { 00:11:18.566 "name": "BaseBdev3", 00:11:18.566 "uuid": "27ed476b-cd63-4e91-9dfc-338dccc079da", 00:11:18.566 "is_configured": true, 00:11:18.566 "data_offset": 0, 00:11:18.566 "data_size": 65536 00:11:18.566 } 00:11:18.566 ] 00:11:18.566 }' 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:18.566 14:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.134 14:10:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.134 [2024-11-27 14:10:56.284979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:19.134 BaseBdev1 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.134 [ 00:11:19.134 { 00:11:19.134 "name": "BaseBdev1", 00:11:19.134 "aliases": [ 00:11:19.134 "644b326a-19d7-4b03-a878-c0928dd65217" 00:11:19.134 ], 00:11:19.134 "product_name": "Malloc disk", 
00:11:19.134 "block_size": 512, 00:11:19.134 "num_blocks": 65536, 00:11:19.134 "uuid": "644b326a-19d7-4b03-a878-c0928dd65217", 00:11:19.134 "assigned_rate_limits": { 00:11:19.134 "rw_ios_per_sec": 0, 00:11:19.134 "rw_mbytes_per_sec": 0, 00:11:19.134 "r_mbytes_per_sec": 0, 00:11:19.134 "w_mbytes_per_sec": 0 00:11:19.134 }, 00:11:19.134 "claimed": true, 00:11:19.134 "claim_type": "exclusive_write", 00:11:19.134 "zoned": false, 00:11:19.134 "supported_io_types": { 00:11:19.134 "read": true, 00:11:19.134 "write": true, 00:11:19.134 "unmap": true, 00:11:19.134 "flush": true, 00:11:19.134 "reset": true, 00:11:19.134 "nvme_admin": false, 00:11:19.134 "nvme_io": false, 00:11:19.134 "nvme_io_md": false, 00:11:19.134 "write_zeroes": true, 00:11:19.134 "zcopy": true, 00:11:19.134 "get_zone_info": false, 00:11:19.134 "zone_management": false, 00:11:19.134 "zone_append": false, 00:11:19.134 "compare": false, 00:11:19.134 "compare_and_write": false, 00:11:19.134 "abort": true, 00:11:19.134 "seek_hole": false, 00:11:19.134 "seek_data": false, 00:11:19.134 "copy": true, 00:11:19.134 "nvme_iov_md": false 00:11:19.134 }, 00:11:19.134 "memory_domains": [ 00:11:19.134 { 00:11:19.134 "dma_device_id": "system", 00:11:19.134 "dma_device_type": 1 00:11:19.134 }, 00:11:19.134 { 00:11:19.134 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.134 "dma_device_type": 2 00:11:19.134 } 00:11:19.134 ], 00:11:19.134 "driver_specific": {} 00:11:19.134 } 00:11:19.134 ] 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.134 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.134 "name": "Existed_Raid", 00:11:19.134 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:19.134 "strip_size_kb": 0, 00:11:19.134 "state": "configuring", 00:11:19.134 "raid_level": "raid1", 00:11:19.134 "superblock": false, 00:11:19.134 "num_base_bdevs": 3, 00:11:19.134 "num_base_bdevs_discovered": 2, 00:11:19.134 "num_base_bdevs_operational": 3, 00:11:19.134 "base_bdevs_list": [ 00:11:19.134 { 00:11:19.134 "name": "BaseBdev1", 00:11:19.134 "uuid": 
"644b326a-19d7-4b03-a878-c0928dd65217", 00:11:19.134 "is_configured": true, 00:11:19.134 "data_offset": 0, 00:11:19.134 "data_size": 65536 00:11:19.134 }, 00:11:19.134 { 00:11:19.134 "name": null, 00:11:19.135 "uuid": "f9f38881-2277-4eee-b059-8e3fd8bd07b9", 00:11:19.135 "is_configured": false, 00:11:19.135 "data_offset": 0, 00:11:19.135 "data_size": 65536 00:11:19.135 }, 00:11:19.135 { 00:11:19.135 "name": "BaseBdev3", 00:11:19.135 "uuid": "27ed476b-cd63-4e91-9dfc-338dccc079da", 00:11:19.135 "is_configured": true, 00:11:19.135 "data_offset": 0, 00:11:19.135 "data_size": 65536 00:11:19.135 } 00:11:19.135 ] 00:11:19.135 }' 00:11:19.135 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.135 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.703 [2024-11-27 14:10:56.905185] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:19.703 14:10:56 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:19.703 "name": "Existed_Raid", 00:11:19.703 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:11:19.703 "strip_size_kb": 0, 00:11:19.703 "state": "configuring", 00:11:19.703 "raid_level": "raid1", 00:11:19.703 "superblock": false, 00:11:19.703 "num_base_bdevs": 3, 00:11:19.703 "num_base_bdevs_discovered": 1, 00:11:19.703 "num_base_bdevs_operational": 3, 00:11:19.703 "base_bdevs_list": [ 00:11:19.703 { 00:11:19.703 "name": "BaseBdev1", 00:11:19.703 "uuid": "644b326a-19d7-4b03-a878-c0928dd65217", 00:11:19.703 "is_configured": true, 00:11:19.703 "data_offset": 0, 00:11:19.703 "data_size": 65536 00:11:19.703 }, 00:11:19.703 { 00:11:19.703 "name": null, 00:11:19.703 "uuid": "f9f38881-2277-4eee-b059-8e3fd8bd07b9", 00:11:19.703 "is_configured": false, 00:11:19.703 "data_offset": 0, 00:11:19.703 "data_size": 65536 00:11:19.703 }, 00:11:19.703 { 00:11:19.703 "name": null, 00:11:19.703 "uuid": "27ed476b-cd63-4e91-9dfc-338dccc079da", 00:11:19.703 "is_configured": false, 00:11:19.703 "data_offset": 0, 00:11:19.703 "data_size": 65536 00:11:19.703 } 00:11:19.703 ] 00:11:19.703 }' 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:19.703 14:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 
-- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.271 [2024-11-27 14:10:57.485407] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:20.271 "name": "Existed_Raid", 00:11:20.271 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:20.271 "strip_size_kb": 0, 00:11:20.271 "state": "configuring", 00:11:20.271 "raid_level": "raid1", 00:11:20.271 "superblock": false, 00:11:20.271 "num_base_bdevs": 3, 00:11:20.271 "num_base_bdevs_discovered": 2, 00:11:20.271 "num_base_bdevs_operational": 3, 00:11:20.271 "base_bdevs_list": [ 00:11:20.271 { 00:11:20.271 "name": "BaseBdev1", 00:11:20.271 "uuid": "644b326a-19d7-4b03-a878-c0928dd65217", 00:11:20.271 "is_configured": true, 00:11:20.271 "data_offset": 0, 00:11:20.271 "data_size": 65536 00:11:20.271 }, 00:11:20.271 { 00:11:20.271 "name": null, 00:11:20.271 "uuid": "f9f38881-2277-4eee-b059-8e3fd8bd07b9", 00:11:20.271 "is_configured": false, 00:11:20.271 "data_offset": 0, 00:11:20.271 "data_size": 65536 00:11:20.271 }, 00:11:20.271 { 00:11:20.271 "name": "BaseBdev3", 00:11:20.271 "uuid": "27ed476b-cd63-4e91-9dfc-338dccc079da", 00:11:20.271 "is_configured": true, 00:11:20.271 "data_offset": 0, 00:11:20.271 "data_size": 65536 00:11:20.271 } 00:11:20.271 ] 00:11:20.271 }' 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:20.271 14:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.839 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:20.839 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:20.839 14:10:58 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.839 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.839 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.839 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:20.839 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:20.839 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.839 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.839 [2024-11-27 14:10:58.057644] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.097 14:10:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.097 "name": "Existed_Raid", 00:11:21.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.097 "strip_size_kb": 0, 00:11:21.097 "state": "configuring", 00:11:21.097 "raid_level": "raid1", 00:11:21.097 "superblock": false, 00:11:21.097 "num_base_bdevs": 3, 00:11:21.097 "num_base_bdevs_discovered": 1, 00:11:21.097 "num_base_bdevs_operational": 3, 00:11:21.097 "base_bdevs_list": [ 00:11:21.097 { 00:11:21.097 "name": null, 00:11:21.097 "uuid": "644b326a-19d7-4b03-a878-c0928dd65217", 00:11:21.097 "is_configured": false, 00:11:21.097 "data_offset": 0, 00:11:21.097 "data_size": 65536 00:11:21.097 }, 00:11:21.097 { 00:11:21.097 "name": null, 00:11:21.097 "uuid": "f9f38881-2277-4eee-b059-8e3fd8bd07b9", 00:11:21.097 "is_configured": false, 00:11:21.097 "data_offset": 0, 00:11:21.097 "data_size": 65536 00:11:21.097 }, 00:11:21.097 { 00:11:21.097 "name": "BaseBdev3", 00:11:21.097 "uuid": "27ed476b-cd63-4e91-9dfc-338dccc079da", 00:11:21.097 "is_configured": true, 00:11:21.097 "data_offset": 0, 00:11:21.097 "data_size": 65536 00:11:21.097 } 00:11:21.097 ] 00:11:21.097 }' 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.097 14:10:58 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.665 [2024-11-27 14:10:58.721158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=3 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:21.665 "name": "Existed_Raid", 00:11:21.665 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:21.665 "strip_size_kb": 0, 00:11:21.665 "state": "configuring", 00:11:21.665 "raid_level": "raid1", 00:11:21.665 "superblock": false, 00:11:21.665 "num_base_bdevs": 3, 00:11:21.665 "num_base_bdevs_discovered": 2, 00:11:21.665 "num_base_bdevs_operational": 3, 00:11:21.665 "base_bdevs_list": [ 00:11:21.665 { 00:11:21.665 "name": null, 00:11:21.665 "uuid": "644b326a-19d7-4b03-a878-c0928dd65217", 00:11:21.665 "is_configured": false, 00:11:21.665 "data_offset": 0, 00:11:21.665 "data_size": 65536 00:11:21.665 }, 00:11:21.665 { 00:11:21.665 "name": "BaseBdev2", 00:11:21.665 "uuid": "f9f38881-2277-4eee-b059-8e3fd8bd07b9", 00:11:21.665 "is_configured": true, 00:11:21.665 "data_offset": 0, 00:11:21.665 "data_size": 65536 00:11:21.665 }, 00:11:21.665 { 
00:11:21.665 "name": "BaseBdev3", 00:11:21.665 "uuid": "27ed476b-cd63-4e91-9dfc-338dccc079da", 00:11:21.665 "is_configured": true, 00:11:21.665 "data_offset": 0, 00:11:21.665 "data_size": 65536 00:11:21.665 } 00:11:21.665 ] 00:11:21.665 }' 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:21.665 14:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 644b326a-19d7-4b03-a878-c0928dd65217 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.234 14:10:59 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.234 [2024-11-27 14:10:59.400008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:22.234 [2024-11-27 14:10:59.400067] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:22.234 [2024-11-27 14:10:59.400080] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:22.234 [2024-11-27 14:10:59.400397] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:22.234 [2024-11-27 14:10:59.400589] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:22.234 [2024-11-27 14:10:59.400610] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:22.234 [2024-11-27 14:10:59.400920] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:22.234 NewBaseBdev 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.234 [ 00:11:22.234 { 00:11:22.234 "name": "NewBaseBdev", 00:11:22.234 "aliases": [ 00:11:22.234 "644b326a-19d7-4b03-a878-c0928dd65217" 00:11:22.234 ], 00:11:22.234 "product_name": "Malloc disk", 00:11:22.234 "block_size": 512, 00:11:22.234 "num_blocks": 65536, 00:11:22.234 "uuid": "644b326a-19d7-4b03-a878-c0928dd65217", 00:11:22.234 "assigned_rate_limits": { 00:11:22.234 "rw_ios_per_sec": 0, 00:11:22.234 "rw_mbytes_per_sec": 0, 00:11:22.234 "r_mbytes_per_sec": 0, 00:11:22.234 "w_mbytes_per_sec": 0 00:11:22.234 }, 00:11:22.234 "claimed": true, 00:11:22.234 "claim_type": "exclusive_write", 00:11:22.234 "zoned": false, 00:11:22.234 "supported_io_types": { 00:11:22.234 "read": true, 00:11:22.234 "write": true, 00:11:22.234 "unmap": true, 00:11:22.234 "flush": true, 00:11:22.234 "reset": true, 00:11:22.234 "nvme_admin": false, 00:11:22.234 "nvme_io": false, 00:11:22.234 "nvme_io_md": false, 00:11:22.234 "write_zeroes": true, 00:11:22.234 "zcopy": true, 00:11:22.234 "get_zone_info": false, 00:11:22.234 "zone_management": false, 00:11:22.234 "zone_append": false, 00:11:22.234 "compare": false, 00:11:22.234 "compare_and_write": false, 00:11:22.234 "abort": true, 00:11:22.234 "seek_hole": false, 00:11:22.234 "seek_data": false, 00:11:22.234 "copy": true, 00:11:22.234 "nvme_iov_md": false 00:11:22.234 }, 00:11:22.234 "memory_domains": [ 00:11:22.234 { 00:11:22.234 
"dma_device_id": "system", 00:11:22.234 "dma_device_type": 1 00:11:22.234 }, 00:11:22.234 { 00:11:22.234 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.234 "dma_device_type": 2 00:11:22.234 } 00:11:22.234 ], 00:11:22.234 "driver_specific": {} 00:11:22.234 } 00:11:22.234 ] 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.234 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:22.235 "name": "Existed_Raid", 00:11:22.235 "uuid": "73f68f94-7299-4515-a9f5-31ab89d9ba6d", 00:11:22.235 "strip_size_kb": 0, 00:11:22.235 "state": "online", 00:11:22.235 "raid_level": "raid1", 00:11:22.235 "superblock": false, 00:11:22.235 "num_base_bdevs": 3, 00:11:22.235 "num_base_bdevs_discovered": 3, 00:11:22.235 "num_base_bdevs_operational": 3, 00:11:22.235 "base_bdevs_list": [ 00:11:22.235 { 00:11:22.235 "name": "NewBaseBdev", 00:11:22.235 "uuid": "644b326a-19d7-4b03-a878-c0928dd65217", 00:11:22.235 "is_configured": true, 00:11:22.235 "data_offset": 0, 00:11:22.235 "data_size": 65536 00:11:22.235 }, 00:11:22.235 { 00:11:22.235 "name": "BaseBdev2", 00:11:22.235 "uuid": "f9f38881-2277-4eee-b059-8e3fd8bd07b9", 00:11:22.235 "is_configured": true, 00:11:22.235 "data_offset": 0, 00:11:22.235 "data_size": 65536 00:11:22.235 }, 00:11:22.235 { 00:11:22.235 "name": "BaseBdev3", 00:11:22.235 "uuid": "27ed476b-cd63-4e91-9dfc-338dccc079da", 00:11:22.235 "is_configured": true, 00:11:22.235 "data_offset": 0, 00:11:22.235 "data_size": 65536 00:11:22.235 } 00:11:22.235 ] 00:11:22.235 }' 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:22.235 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.803 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:22.803 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:22.803 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:22.803 14:10:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:22.803 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:22.803 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:22.803 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:22.803 14:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:22.803 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:22.803 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:22.803 [2024-11-27 14:10:59.960656] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:22.803 14:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:22.803 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:22.803 "name": "Existed_Raid", 00:11:22.803 "aliases": [ 00:11:22.803 "73f68f94-7299-4515-a9f5-31ab89d9ba6d" 00:11:22.803 ], 00:11:22.804 "product_name": "Raid Volume", 00:11:22.804 "block_size": 512, 00:11:22.804 "num_blocks": 65536, 00:11:22.804 "uuid": "73f68f94-7299-4515-a9f5-31ab89d9ba6d", 00:11:22.804 "assigned_rate_limits": { 00:11:22.804 "rw_ios_per_sec": 0, 00:11:22.804 "rw_mbytes_per_sec": 0, 00:11:22.804 "r_mbytes_per_sec": 0, 00:11:22.804 "w_mbytes_per_sec": 0 00:11:22.804 }, 00:11:22.804 "claimed": false, 00:11:22.804 "zoned": false, 00:11:22.804 "supported_io_types": { 00:11:22.804 "read": true, 00:11:22.804 "write": true, 00:11:22.804 "unmap": false, 00:11:22.804 "flush": false, 00:11:22.804 "reset": true, 00:11:22.804 "nvme_admin": false, 00:11:22.804 "nvme_io": false, 00:11:22.804 "nvme_io_md": false, 00:11:22.804 "write_zeroes": true, 00:11:22.804 "zcopy": false, 00:11:22.804 
"get_zone_info": false, 00:11:22.804 "zone_management": false, 00:11:22.804 "zone_append": false, 00:11:22.804 "compare": false, 00:11:22.804 "compare_and_write": false, 00:11:22.804 "abort": false, 00:11:22.804 "seek_hole": false, 00:11:22.804 "seek_data": false, 00:11:22.804 "copy": false, 00:11:22.804 "nvme_iov_md": false 00:11:22.804 }, 00:11:22.804 "memory_domains": [ 00:11:22.804 { 00:11:22.804 "dma_device_id": "system", 00:11:22.804 "dma_device_type": 1 00:11:22.804 }, 00:11:22.804 { 00:11:22.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.804 "dma_device_type": 2 00:11:22.804 }, 00:11:22.804 { 00:11:22.804 "dma_device_id": "system", 00:11:22.804 "dma_device_type": 1 00:11:22.804 }, 00:11:22.804 { 00:11:22.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.804 "dma_device_type": 2 00:11:22.804 }, 00:11:22.804 { 00:11:22.804 "dma_device_id": "system", 00:11:22.804 "dma_device_type": 1 00:11:22.804 }, 00:11:22.804 { 00:11:22.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:22.804 "dma_device_type": 2 00:11:22.804 } 00:11:22.804 ], 00:11:22.804 "driver_specific": { 00:11:22.804 "raid": { 00:11:22.804 "uuid": "73f68f94-7299-4515-a9f5-31ab89d9ba6d", 00:11:22.804 "strip_size_kb": 0, 00:11:22.804 "state": "online", 00:11:22.804 "raid_level": "raid1", 00:11:22.804 "superblock": false, 00:11:22.804 "num_base_bdevs": 3, 00:11:22.804 "num_base_bdevs_discovered": 3, 00:11:22.804 "num_base_bdevs_operational": 3, 00:11:22.804 "base_bdevs_list": [ 00:11:22.804 { 00:11:22.804 "name": "NewBaseBdev", 00:11:22.804 "uuid": "644b326a-19d7-4b03-a878-c0928dd65217", 00:11:22.804 "is_configured": true, 00:11:22.804 "data_offset": 0, 00:11:22.804 "data_size": 65536 00:11:22.804 }, 00:11:22.804 { 00:11:22.804 "name": "BaseBdev2", 00:11:22.804 "uuid": "f9f38881-2277-4eee-b059-8e3fd8bd07b9", 00:11:22.804 "is_configured": true, 00:11:22.804 "data_offset": 0, 00:11:22.804 "data_size": 65536 00:11:22.804 }, 00:11:22.804 { 00:11:22.804 "name": "BaseBdev3", 00:11:22.804 "uuid": 
"27ed476b-cd63-4e91-9dfc-338dccc079da", 00:11:22.804 "is_configured": true, 00:11:22.804 "data_offset": 0, 00:11:22.804 "data_size": 65536 00:11:22.804 } 00:11:22.804 ] 00:11:22.804 } 00:11:22.804 } 00:11:22.804 }' 00:11:22.804 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:22.804 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:22.804 BaseBdev2 00:11:22.804 BaseBdev3' 00:11:22.804 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:23.063 [2024-11-27 14:11:00.284327] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:23.063 [2024-11-27 14:11:00.284369] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:23.063 [2024-11-27 14:11:00.284450] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:23.063 [2024-11-27 14:11:00.284842] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:23.063 [2024-11-27 14:11:00.284860] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 67342 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 67342 ']' 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 67342 00:11:23.063 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:11:23.064 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:23.064 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67342 00:11:23.064 killing process with pid 67342 00:11:23.064 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:23.064 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:23.064 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67342' 00:11:23.064 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 67342 00:11:23.064 
[2024-11-27 14:11:00.325787] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:23.064 14:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 67342 00:11:23.322 [2024-11-27 14:11:00.593304] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:24.698 ************************************ 00:11:24.698 END TEST raid_state_function_test 00:11:24.698 ************************************ 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:11:24.698 00:11:24.698 real 0m11.973s 00:11:24.698 user 0m19.920s 00:11:24.698 sys 0m1.677s 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:24.698 14:11:01 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:24.698 14:11:01 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:24.698 14:11:01 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:24.698 14:11:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:24.698 ************************************ 00:11:24.698 START TEST raid_state_function_test_sb 00:11:24.698 ************************************ 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 3 true 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:24.698 14:11:01 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:11:24.698 
14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:11:24.698 Process raid pid: 67980 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=67980 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 67980' 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 67980 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 67980 ']' 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:24.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:24.698 14:11:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:24.698 [2024-11-27 14:11:01.745300] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:11:24.698 [2024-11-27 14:11:01.745469] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:24.698 [2024-11-27 14:11:01.921929] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:24.956 [2024-11-27 14:11:02.052778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:25.214 [2024-11-27 14:11:02.255149] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.214 [2024-11-27 14:11:02.255411] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.473 [2024-11-27 14:11:02.741112] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:25.473 [2024-11-27 14:11:02.741174] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:25.473 [2024-11-27 14:11:02.741191] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:25.473 [2024-11-27 14:11:02.741208] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:25.473 [2024-11-27 14:11:02.741218] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:25.473 [2024-11-27 14:11:02.741232] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:25.473 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:25.731 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:25.731 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:25.731 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:25.731 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:25.731 14:11:02 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:25.731 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:25.731 "name": "Existed_Raid", 00:11:25.731 "uuid": "6d3835e8-0e6c-4132-b1bf-2fcb66e1007a", 00:11:25.731 "strip_size_kb": 0, 00:11:25.731 "state": "configuring", 00:11:25.731 "raid_level": "raid1", 00:11:25.731 "superblock": true, 00:11:25.731 "num_base_bdevs": 3, 00:11:25.731 "num_base_bdevs_discovered": 0, 00:11:25.731 "num_base_bdevs_operational": 3, 00:11:25.731 "base_bdevs_list": [ 00:11:25.731 { 00:11:25.731 "name": "BaseBdev1", 00:11:25.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.731 "is_configured": false, 00:11:25.731 "data_offset": 0, 00:11:25.731 "data_size": 0 00:11:25.731 }, 00:11:25.731 { 00:11:25.731 "name": "BaseBdev2", 00:11:25.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.731 "is_configured": false, 00:11:25.731 "data_offset": 0, 00:11:25.731 "data_size": 0 00:11:25.731 }, 00:11:25.731 { 00:11:25.731 "name": "BaseBdev3", 00:11:25.731 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:25.731 "is_configured": false, 00:11:25.731 "data_offset": 0, 00:11:25.731 "data_size": 0 00:11:25.731 } 00:11:25.731 ] 00:11:25.731 }' 00:11:25.731 14:11:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:25.731 14:11:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.299 [2024-11-27 14:11:03.289237] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.299 [2024-11-27 14:11:03.289280] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.299 [2024-11-27 14:11:03.297239] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:26.299 [2024-11-27 14:11:03.297432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:26.299 [2024-11-27 14:11:03.297588] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.299 [2024-11-27 14:11:03.297651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.299 [2024-11-27 14:11:03.297811] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:26.299 [2024-11-27 14:11:03.297870] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.299 [2024-11-27 14:11:03.347032] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.299 BaseBdev1 
00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.299 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.299 [ 00:11:26.299 { 00:11:26.299 "name": "BaseBdev1", 00:11:26.299 "aliases": [ 00:11:26.299 "fe24cd30-728f-4d3a-88dd-bbe0421496f8" 00:11:26.299 ], 00:11:26.299 "product_name": "Malloc disk", 00:11:26.299 "block_size": 512, 00:11:26.299 "num_blocks": 65536, 00:11:26.299 "uuid": "fe24cd30-728f-4d3a-88dd-bbe0421496f8", 00:11:26.299 "assigned_rate_limits": { 00:11:26.299 
"rw_ios_per_sec": 0, 00:11:26.299 "rw_mbytes_per_sec": 0, 00:11:26.299 "r_mbytes_per_sec": 0, 00:11:26.299 "w_mbytes_per_sec": 0 00:11:26.299 }, 00:11:26.299 "claimed": true, 00:11:26.299 "claim_type": "exclusive_write", 00:11:26.299 "zoned": false, 00:11:26.299 "supported_io_types": { 00:11:26.299 "read": true, 00:11:26.299 "write": true, 00:11:26.299 "unmap": true, 00:11:26.299 "flush": true, 00:11:26.299 "reset": true, 00:11:26.299 "nvme_admin": false, 00:11:26.299 "nvme_io": false, 00:11:26.299 "nvme_io_md": false, 00:11:26.299 "write_zeroes": true, 00:11:26.299 "zcopy": true, 00:11:26.299 "get_zone_info": false, 00:11:26.299 "zone_management": false, 00:11:26.299 "zone_append": false, 00:11:26.299 "compare": false, 00:11:26.299 "compare_and_write": false, 00:11:26.299 "abort": true, 00:11:26.299 "seek_hole": false, 00:11:26.299 "seek_data": false, 00:11:26.299 "copy": true, 00:11:26.299 "nvme_iov_md": false 00:11:26.299 }, 00:11:26.299 "memory_domains": [ 00:11:26.299 { 00:11:26.299 "dma_device_id": "system", 00:11:26.299 "dma_device_type": 1 00:11:26.299 }, 00:11:26.299 { 00:11:26.300 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:26.300 "dma_device_type": 2 00:11:26.300 } 00:11:26.300 ], 00:11:26.300 "driver_specific": {} 00:11:26.300 } 00:11:26.300 ] 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.300 "name": "Existed_Raid", 00:11:26.300 "uuid": "7e5177bb-792f-462e-9945-6dd4c5f5d49d", 00:11:26.300 "strip_size_kb": 0, 00:11:26.300 "state": "configuring", 00:11:26.300 "raid_level": "raid1", 00:11:26.300 "superblock": true, 00:11:26.300 "num_base_bdevs": 3, 00:11:26.300 "num_base_bdevs_discovered": 1, 00:11:26.300 "num_base_bdevs_operational": 3, 00:11:26.300 "base_bdevs_list": [ 00:11:26.300 { 00:11:26.300 "name": "BaseBdev1", 00:11:26.300 "uuid": "fe24cd30-728f-4d3a-88dd-bbe0421496f8", 00:11:26.300 "is_configured": true, 00:11:26.300 "data_offset": 2048, 00:11:26.300 "data_size": 63488 
00:11:26.300 }, 00:11:26.300 { 00:11:26.300 "name": "BaseBdev2", 00:11:26.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.300 "is_configured": false, 00:11:26.300 "data_offset": 0, 00:11:26.300 "data_size": 0 00:11:26.300 }, 00:11:26.300 { 00:11:26.300 "name": "BaseBdev3", 00:11:26.300 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.300 "is_configured": false, 00:11:26.300 "data_offset": 0, 00:11:26.300 "data_size": 0 00:11:26.300 } 00:11:26.300 ] 00:11:26.300 }' 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.300 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.868 [2024-11-27 14:11:03.915248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:26.868 [2024-11-27 14:11:03.915312] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.868 [2024-11-27 14:11:03.923283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:26.868 [2024-11-27 14:11:03.926046] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:26.868 [2024-11-27 14:11:03.926109] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:26.868 [2024-11-27 14:11:03.926129] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:26.868 [2024-11-27 14:11:03.926148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 
00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:26.868 "name": "Existed_Raid", 00:11:26.868 "uuid": "6d573ef2-1e21-45c6-9209-37ceb566f583", 00:11:26.868 "strip_size_kb": 0, 00:11:26.868 "state": "configuring", 00:11:26.868 "raid_level": "raid1", 00:11:26.868 "superblock": true, 00:11:26.868 "num_base_bdevs": 3, 00:11:26.868 "num_base_bdevs_discovered": 1, 00:11:26.868 "num_base_bdevs_operational": 3, 00:11:26.868 "base_bdevs_list": [ 00:11:26.868 { 00:11:26.868 "name": "BaseBdev1", 00:11:26.868 "uuid": "fe24cd30-728f-4d3a-88dd-bbe0421496f8", 00:11:26.868 "is_configured": true, 00:11:26.868 "data_offset": 2048, 00:11:26.868 "data_size": 63488 00:11:26.868 }, 00:11:26.868 { 00:11:26.868 "name": "BaseBdev2", 00:11:26.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.868 "is_configured": false, 00:11:26.868 "data_offset": 0, 00:11:26.868 "data_size": 0 00:11:26.868 }, 00:11:26.868 { 00:11:26.868 "name": "BaseBdev3", 00:11:26.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:26.868 "is_configured": false, 00:11:26.868 "data_offset": 0, 00:11:26.868 "data_size": 0 00:11:26.868 } 00:11:26.868 ] 00:11:26.868 }' 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:26.868 14:11:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.435 [2024-11-27 14:11:04.472426] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:27.435 BaseBdev2 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.435 [ 00:11:27.435 { 00:11:27.435 "name": "BaseBdev2", 00:11:27.435 "aliases": [ 00:11:27.435 "fe6625b3-e4b4-4ad5-bf8a-373909ad4a53" 00:11:27.435 ], 00:11:27.435 "product_name": "Malloc disk", 00:11:27.435 "block_size": 512, 00:11:27.435 "num_blocks": 65536, 00:11:27.435 "uuid": "fe6625b3-e4b4-4ad5-bf8a-373909ad4a53", 00:11:27.435 "assigned_rate_limits": { 00:11:27.435 "rw_ios_per_sec": 0, 00:11:27.435 "rw_mbytes_per_sec": 0, 00:11:27.435 "r_mbytes_per_sec": 0, 00:11:27.435 "w_mbytes_per_sec": 0 00:11:27.435 }, 00:11:27.435 "claimed": true, 00:11:27.435 "claim_type": "exclusive_write", 00:11:27.435 "zoned": false, 00:11:27.435 "supported_io_types": { 00:11:27.435 "read": true, 00:11:27.435 "write": true, 00:11:27.435 "unmap": true, 00:11:27.435 "flush": true, 00:11:27.435 "reset": true, 00:11:27.435 "nvme_admin": false, 00:11:27.435 "nvme_io": false, 00:11:27.435 "nvme_io_md": false, 00:11:27.435 "write_zeroes": true, 00:11:27.435 "zcopy": true, 00:11:27.435 "get_zone_info": false, 00:11:27.435 "zone_management": false, 00:11:27.435 "zone_append": false, 00:11:27.435 "compare": false, 00:11:27.435 "compare_and_write": false, 00:11:27.435 "abort": true, 00:11:27.435 "seek_hole": false, 00:11:27.435 "seek_data": false, 00:11:27.435 "copy": true, 00:11:27.435 "nvme_iov_md": false 00:11:27.435 }, 00:11:27.435 "memory_domains": [ 00:11:27.435 { 00:11:27.435 "dma_device_id": "system", 00:11:27.435 "dma_device_type": 1 00:11:27.435 }, 00:11:27.435 { 00:11:27.435 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:27.435 "dma_device_type": 2 00:11:27.435 } 00:11:27.435 ], 00:11:27.435 "driver_specific": {} 00:11:27.435 } 00:11:27.435 ] 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:27.435 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:27.436 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:27.436 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.436 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:27.436 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.436 
14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:27.436 "name": "Existed_Raid", 00:11:27.436 "uuid": "6d573ef2-1e21-45c6-9209-37ceb566f583", 00:11:27.436 "strip_size_kb": 0, 00:11:27.436 "state": "configuring", 00:11:27.436 "raid_level": "raid1", 00:11:27.436 "superblock": true, 00:11:27.436 "num_base_bdevs": 3, 00:11:27.436 "num_base_bdevs_discovered": 2, 00:11:27.436 "num_base_bdevs_operational": 3, 00:11:27.436 "base_bdevs_list": [ 00:11:27.436 { 00:11:27.436 "name": "BaseBdev1", 00:11:27.436 "uuid": "fe24cd30-728f-4d3a-88dd-bbe0421496f8", 00:11:27.436 "is_configured": true, 00:11:27.436 "data_offset": 2048, 00:11:27.436 "data_size": 63488 00:11:27.436 }, 00:11:27.436 { 00:11:27.436 "name": "BaseBdev2", 00:11:27.436 "uuid": "fe6625b3-e4b4-4ad5-bf8a-373909ad4a53", 00:11:27.436 "is_configured": true, 00:11:27.436 "data_offset": 2048, 00:11:27.436 "data_size": 63488 00:11:27.436 }, 00:11:27.436 { 00:11:27.436 "name": "BaseBdev3", 00:11:27.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:27.436 "is_configured": false, 00:11:27.436 "data_offset": 0, 00:11:27.436 "data_size": 0 00:11:27.436 } 00:11:27.436 ] 00:11:27.436 }' 00:11:27.436 14:11:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:27.436 14:11:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.002 [2024-11-27 14:11:05.096405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:28.002 [2024-11-27 14:11:05.096704] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 
0x617000007e80 00:11:28.002 [2024-11-27 14:11:05.096736] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:28.002 [2024-11-27 14:11:05.097101] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:28.002 BaseBdev3 00:11:28.002 [2024-11-27 14:11:05.097302] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:28.002 [2024-11-27 14:11:05.097318] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:28.002 [2024-11-27 14:11:05.097495] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.002 14:11:05 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.002 [ 00:11:28.002 { 00:11:28.002 "name": "BaseBdev3", 00:11:28.002 "aliases": [ 00:11:28.002 "833f28cf-84cd-4567-b23e-4e0d8d4be0d7" 00:11:28.002 ], 00:11:28.002 "product_name": "Malloc disk", 00:11:28.002 "block_size": 512, 00:11:28.002 "num_blocks": 65536, 00:11:28.002 "uuid": "833f28cf-84cd-4567-b23e-4e0d8d4be0d7", 00:11:28.002 "assigned_rate_limits": { 00:11:28.002 "rw_ios_per_sec": 0, 00:11:28.002 "rw_mbytes_per_sec": 0, 00:11:28.002 "r_mbytes_per_sec": 0, 00:11:28.002 "w_mbytes_per_sec": 0 00:11:28.002 }, 00:11:28.002 "claimed": true, 00:11:28.002 "claim_type": "exclusive_write", 00:11:28.002 "zoned": false, 00:11:28.002 "supported_io_types": { 00:11:28.002 "read": true, 00:11:28.002 "write": true, 00:11:28.002 "unmap": true, 00:11:28.002 "flush": true, 00:11:28.002 "reset": true, 00:11:28.002 "nvme_admin": false, 00:11:28.002 "nvme_io": false, 00:11:28.002 "nvme_io_md": false, 00:11:28.002 "write_zeroes": true, 00:11:28.002 "zcopy": true, 00:11:28.002 "get_zone_info": false, 00:11:28.002 "zone_management": false, 00:11:28.002 "zone_append": false, 00:11:28.002 "compare": false, 00:11:28.002 "compare_and_write": false, 00:11:28.002 "abort": true, 00:11:28.002 "seek_hole": false, 00:11:28.002 "seek_data": false, 00:11:28.002 "copy": true, 00:11:28.002 "nvme_iov_md": false 00:11:28.002 }, 00:11:28.002 "memory_domains": [ 00:11:28.002 { 00:11:28.002 "dma_device_id": "system", 00:11:28.002 "dma_device_type": 1 00:11:28.002 }, 00:11:28.002 { 00:11:28.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.002 "dma_device_type": 2 00:11:28.002 } 00:11:28.002 ], 00:11:28.002 "driver_specific": {} 00:11:28.002 } 00:11:28.002 ] 
00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:28.002 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.003 
14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:28.003 "name": "Existed_Raid", 00:11:28.003 "uuid": "6d573ef2-1e21-45c6-9209-37ceb566f583", 00:11:28.003 "strip_size_kb": 0, 00:11:28.003 "state": "online", 00:11:28.003 "raid_level": "raid1", 00:11:28.003 "superblock": true, 00:11:28.003 "num_base_bdevs": 3, 00:11:28.003 "num_base_bdevs_discovered": 3, 00:11:28.003 "num_base_bdevs_operational": 3, 00:11:28.003 "base_bdevs_list": [ 00:11:28.003 { 00:11:28.003 "name": "BaseBdev1", 00:11:28.003 "uuid": "fe24cd30-728f-4d3a-88dd-bbe0421496f8", 00:11:28.003 "is_configured": true, 00:11:28.003 "data_offset": 2048, 00:11:28.003 "data_size": 63488 00:11:28.003 }, 00:11:28.003 { 00:11:28.003 "name": "BaseBdev2", 00:11:28.003 "uuid": "fe6625b3-e4b4-4ad5-bf8a-373909ad4a53", 00:11:28.003 "is_configured": true, 00:11:28.003 "data_offset": 2048, 00:11:28.003 "data_size": 63488 00:11:28.003 }, 00:11:28.003 { 00:11:28.003 "name": "BaseBdev3", 00:11:28.003 "uuid": "833f28cf-84cd-4567-b23e-4e0d8d4be0d7", 00:11:28.003 "is_configured": true, 00:11:28.003 "data_offset": 2048, 00:11:28.003 "data_size": 63488 00:11:28.003 } 00:11:28.003 ] 00:11:28.003 }' 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:28.003 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 
00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.570 [2024-11-27 14:11:05.649041] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:28.570 "name": "Existed_Raid", 00:11:28.570 "aliases": [ 00:11:28.570 "6d573ef2-1e21-45c6-9209-37ceb566f583" 00:11:28.570 ], 00:11:28.570 "product_name": "Raid Volume", 00:11:28.570 "block_size": 512, 00:11:28.570 "num_blocks": 63488, 00:11:28.570 "uuid": "6d573ef2-1e21-45c6-9209-37ceb566f583", 00:11:28.570 "assigned_rate_limits": { 00:11:28.570 "rw_ios_per_sec": 0, 00:11:28.570 "rw_mbytes_per_sec": 0, 00:11:28.570 "r_mbytes_per_sec": 0, 00:11:28.570 "w_mbytes_per_sec": 0 00:11:28.570 }, 00:11:28.570 "claimed": false, 00:11:28.570 "zoned": false, 00:11:28.570 "supported_io_types": { 00:11:28.570 "read": true, 00:11:28.570 "write": true, 00:11:28.570 "unmap": false, 00:11:28.570 "flush": false, 00:11:28.570 "reset": true, 00:11:28.570 "nvme_admin": false, 00:11:28.570 "nvme_io": false, 00:11:28.570 "nvme_io_md": false, 00:11:28.570 "write_zeroes": true, 
00:11:28.570 "zcopy": false, 00:11:28.570 "get_zone_info": false, 00:11:28.570 "zone_management": false, 00:11:28.570 "zone_append": false, 00:11:28.570 "compare": false, 00:11:28.570 "compare_and_write": false, 00:11:28.570 "abort": false, 00:11:28.570 "seek_hole": false, 00:11:28.570 "seek_data": false, 00:11:28.570 "copy": false, 00:11:28.570 "nvme_iov_md": false 00:11:28.570 }, 00:11:28.570 "memory_domains": [ 00:11:28.570 { 00:11:28.570 "dma_device_id": "system", 00:11:28.570 "dma_device_type": 1 00:11:28.570 }, 00:11:28.570 { 00:11:28.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.570 "dma_device_type": 2 00:11:28.570 }, 00:11:28.570 { 00:11:28.570 "dma_device_id": "system", 00:11:28.570 "dma_device_type": 1 00:11:28.570 }, 00:11:28.570 { 00:11:28.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.570 "dma_device_type": 2 00:11:28.570 }, 00:11:28.570 { 00:11:28.570 "dma_device_id": "system", 00:11:28.570 "dma_device_type": 1 00:11:28.570 }, 00:11:28.570 { 00:11:28.570 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:28.570 "dma_device_type": 2 00:11:28.570 } 00:11:28.570 ], 00:11:28.570 "driver_specific": { 00:11:28.570 "raid": { 00:11:28.570 "uuid": "6d573ef2-1e21-45c6-9209-37ceb566f583", 00:11:28.570 "strip_size_kb": 0, 00:11:28.570 "state": "online", 00:11:28.570 "raid_level": "raid1", 00:11:28.570 "superblock": true, 00:11:28.570 "num_base_bdevs": 3, 00:11:28.570 "num_base_bdevs_discovered": 3, 00:11:28.570 "num_base_bdevs_operational": 3, 00:11:28.570 "base_bdevs_list": [ 00:11:28.570 { 00:11:28.570 "name": "BaseBdev1", 00:11:28.570 "uuid": "fe24cd30-728f-4d3a-88dd-bbe0421496f8", 00:11:28.570 "is_configured": true, 00:11:28.570 "data_offset": 2048, 00:11:28.570 "data_size": 63488 00:11:28.570 }, 00:11:28.570 { 00:11:28.570 "name": "BaseBdev2", 00:11:28.570 "uuid": "fe6625b3-e4b4-4ad5-bf8a-373909ad4a53", 00:11:28.570 "is_configured": true, 00:11:28.570 "data_offset": 2048, 00:11:28.570 "data_size": 63488 00:11:28.570 }, 00:11:28.570 { 
00:11:28.570 "name": "BaseBdev3", 00:11:28.570 "uuid": "833f28cf-84cd-4567-b23e-4e0d8d4be0d7", 00:11:28.570 "is_configured": true, 00:11:28.570 "data_offset": 2048, 00:11:28.570 "data_size": 63488 00:11:28.570 } 00:11:28.570 ] 00:11:28.570 } 00:11:28.570 } 00:11:28.570 }' 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:28.570 BaseBdev2 00:11:28.570 BaseBdev3' 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.570 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.829 14:11:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.829 14:11:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.829 [2024-11-27 14:11:05.972851] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:28.829 
14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:28.829 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.088 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:29.088 "name": "Existed_Raid", 00:11:29.088 "uuid": "6d573ef2-1e21-45c6-9209-37ceb566f583", 00:11:29.088 "strip_size_kb": 0, 00:11:29.088 "state": "online", 00:11:29.088 "raid_level": "raid1", 00:11:29.088 "superblock": true, 00:11:29.088 "num_base_bdevs": 3, 00:11:29.088 "num_base_bdevs_discovered": 2, 00:11:29.088 "num_base_bdevs_operational": 2, 00:11:29.088 "base_bdevs_list": [ 00:11:29.088 { 00:11:29.088 "name": null, 00:11:29.088 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:29.088 "is_configured": false, 00:11:29.088 "data_offset": 0, 00:11:29.088 "data_size": 63488 00:11:29.088 }, 00:11:29.088 { 00:11:29.088 "name": "BaseBdev2", 00:11:29.088 "uuid": "fe6625b3-e4b4-4ad5-bf8a-373909ad4a53", 00:11:29.088 "is_configured": true, 00:11:29.088 "data_offset": 2048, 00:11:29.088 "data_size": 63488 00:11:29.088 }, 00:11:29.088 { 00:11:29.088 "name": "BaseBdev3", 00:11:29.088 "uuid": "833f28cf-84cd-4567-b23e-4e0d8d4be0d7", 00:11:29.088 "is_configured": true, 00:11:29.088 "data_offset": 2048, 00:11:29.088 "data_size": 63488 00:11:29.088 } 00:11:29.088 ] 00:11:29.088 }' 00:11:29.088 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:29.088 
14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.346 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:11:29.346 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.346 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.346 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.346 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:29.346 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.346 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.604 [2024-11-27 14:11:06.660757] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.604 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.604 [2024-11-27 14:11:06.814241] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:29.604 [2024-11-27 14:11:06.814386] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:29.863 [2024-11-27 14:11:06.906114] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:29.863 [2024-11-27 14:11:06.906197] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:29.863 [2024-11-27 14:11:06.906219] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.863 14:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.863 BaseBdev2 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.863 [ 00:11:29.863 { 00:11:29.863 "name": "BaseBdev2", 00:11:29.863 "aliases": [ 00:11:29.863 "86a111ff-7f70-4104-906a-58577dc81ebf" 00:11:29.863 ], 00:11:29.863 "product_name": "Malloc disk", 00:11:29.863 "block_size": 512, 00:11:29.863 "num_blocks": 65536, 00:11:29.863 "uuid": "86a111ff-7f70-4104-906a-58577dc81ebf", 00:11:29.863 "assigned_rate_limits": { 00:11:29.863 "rw_ios_per_sec": 0, 00:11:29.863 "rw_mbytes_per_sec": 0, 00:11:29.863 "r_mbytes_per_sec": 0, 00:11:29.863 "w_mbytes_per_sec": 0 00:11:29.863 }, 00:11:29.863 "claimed": false, 00:11:29.863 "zoned": false, 00:11:29.863 "supported_io_types": { 00:11:29.863 "read": true, 00:11:29.863 "write": true, 00:11:29.863 "unmap": true, 00:11:29.863 "flush": true, 00:11:29.863 "reset": true, 00:11:29.863 "nvme_admin": false, 00:11:29.863 "nvme_io": false, 00:11:29.863 
"nvme_io_md": false, 00:11:29.863 "write_zeroes": true, 00:11:29.863 "zcopy": true, 00:11:29.863 "get_zone_info": false, 00:11:29.863 "zone_management": false, 00:11:29.863 "zone_append": false, 00:11:29.863 "compare": false, 00:11:29.863 "compare_and_write": false, 00:11:29.863 "abort": true, 00:11:29.863 "seek_hole": false, 00:11:29.863 "seek_data": false, 00:11:29.863 "copy": true, 00:11:29.863 "nvme_iov_md": false 00:11:29.863 }, 00:11:29.863 "memory_domains": [ 00:11:29.863 { 00:11:29.863 "dma_device_id": "system", 00:11:29.863 "dma_device_type": 1 00:11:29.863 }, 00:11:29.863 { 00:11:29.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.863 "dma_device_type": 2 00:11:29.863 } 00:11:29.863 ], 00:11:29.863 "driver_specific": {} 00:11:29.863 } 00:11:29.863 ] 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.863 BaseBdev3 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.863 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.863 [ 00:11:29.863 { 00:11:29.863 "name": "BaseBdev3", 00:11:29.863 "aliases": [ 00:11:29.863 "3804e576-8666-4745-80cd-b88278756b0e" 00:11:29.863 ], 00:11:29.863 "product_name": "Malloc disk", 00:11:29.863 "block_size": 512, 00:11:29.863 "num_blocks": 65536, 00:11:29.863 "uuid": "3804e576-8666-4745-80cd-b88278756b0e", 00:11:29.863 "assigned_rate_limits": { 00:11:29.863 "rw_ios_per_sec": 0, 00:11:29.863 "rw_mbytes_per_sec": 0, 00:11:29.863 "r_mbytes_per_sec": 0, 00:11:29.863 "w_mbytes_per_sec": 0 00:11:29.863 }, 00:11:29.863 "claimed": false, 00:11:29.863 "zoned": false, 00:11:29.864 "supported_io_types": { 00:11:29.864 "read": true, 00:11:29.864 "write": true, 00:11:29.864 "unmap": true, 00:11:29.864 "flush": true, 00:11:29.864 "reset": true, 00:11:29.864 "nvme_admin": false, 
00:11:29.864 "nvme_io": false, 00:11:29.864 "nvme_io_md": false, 00:11:29.864 "write_zeroes": true, 00:11:29.864 "zcopy": true, 00:11:29.864 "get_zone_info": false, 00:11:29.864 "zone_management": false, 00:11:29.864 "zone_append": false, 00:11:29.864 "compare": false, 00:11:29.864 "compare_and_write": false, 00:11:29.864 "abort": true, 00:11:29.864 "seek_hole": false, 00:11:29.864 "seek_data": false, 00:11:29.864 "copy": true, 00:11:29.864 "nvme_iov_md": false 00:11:29.864 }, 00:11:29.864 "memory_domains": [ 00:11:29.864 { 00:11:29.864 "dma_device_id": "system", 00:11:29.864 "dma_device_type": 1 00:11:29.864 }, 00:11:29.864 { 00:11:29.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.864 "dma_device_type": 2 00:11:29.864 } 00:11:29.864 ], 00:11:29.864 "driver_specific": {} 00:11:29.864 } 00:11:29.864 ] 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.864 [2024-11-27 14:11:07.104796] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:29.864 [2024-11-27 14:11:07.104858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:29.864 [2024-11-27 14:11:07.104886] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.864 [2024-11-27 14:11:07.107405] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:29.864 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:29.864 
14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.122 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.122 "name": "Existed_Raid", 00:11:30.122 "uuid": "1bdd946f-dea1-4c40-80f4-343499ff878a", 00:11:30.122 "strip_size_kb": 0, 00:11:30.122 "state": "configuring", 00:11:30.122 "raid_level": "raid1", 00:11:30.122 "superblock": true, 00:11:30.122 "num_base_bdevs": 3, 00:11:30.122 "num_base_bdevs_discovered": 2, 00:11:30.122 "num_base_bdevs_operational": 3, 00:11:30.122 "base_bdevs_list": [ 00:11:30.122 { 00:11:30.122 "name": "BaseBdev1", 00:11:30.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.122 "is_configured": false, 00:11:30.122 "data_offset": 0, 00:11:30.122 "data_size": 0 00:11:30.122 }, 00:11:30.122 { 00:11:30.122 "name": "BaseBdev2", 00:11:30.122 "uuid": "86a111ff-7f70-4104-906a-58577dc81ebf", 00:11:30.122 "is_configured": true, 00:11:30.122 "data_offset": 2048, 00:11:30.122 "data_size": 63488 00:11:30.122 }, 00:11:30.122 { 00:11:30.122 "name": "BaseBdev3", 00:11:30.122 "uuid": "3804e576-8666-4745-80cd-b88278756b0e", 00:11:30.122 "is_configured": true, 00:11:30.122 "data_offset": 2048, 00:11:30.122 "data_size": 63488 00:11:30.122 } 00:11:30.122 ] 00:11:30.122 }' 00:11:30.122 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.122 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.688 [2024-11-27 14:11:07.672980] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:30.688 14:11:07 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:30.688 "name": 
"Existed_Raid", 00:11:30.688 "uuid": "1bdd946f-dea1-4c40-80f4-343499ff878a", 00:11:30.688 "strip_size_kb": 0, 00:11:30.688 "state": "configuring", 00:11:30.688 "raid_level": "raid1", 00:11:30.688 "superblock": true, 00:11:30.688 "num_base_bdevs": 3, 00:11:30.688 "num_base_bdevs_discovered": 1, 00:11:30.688 "num_base_bdevs_operational": 3, 00:11:30.688 "base_bdevs_list": [ 00:11:30.688 { 00:11:30.688 "name": "BaseBdev1", 00:11:30.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:30.688 "is_configured": false, 00:11:30.688 "data_offset": 0, 00:11:30.688 "data_size": 0 00:11:30.688 }, 00:11:30.688 { 00:11:30.688 "name": null, 00:11:30.688 "uuid": "86a111ff-7f70-4104-906a-58577dc81ebf", 00:11:30.688 "is_configured": false, 00:11:30.688 "data_offset": 0, 00:11:30.688 "data_size": 63488 00:11:30.688 }, 00:11:30.688 { 00:11:30.688 "name": "BaseBdev3", 00:11:30.688 "uuid": "3804e576-8666-4745-80cd-b88278756b0e", 00:11:30.688 "is_configured": true, 00:11:30.688 "data_offset": 2048, 00:11:30.688 "data_size": 63488 00:11:30.688 } 00:11:30.688 ] 00:11:30.688 }' 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:30.688 14:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.946 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:30.946 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:30.946 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.946 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:30.946 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:11:31.204 
14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.204 [2024-11-27 14:11:08.287219] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:31.204 BaseBdev1 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:11:31.204 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.204 [ 00:11:31.204 { 00:11:31.204 "name": "BaseBdev1", 00:11:31.204 "aliases": [ 00:11:31.204 "cb19723f-188b-4dc6-86d5-dbc308edaa52" 00:11:31.204 ], 00:11:31.204 "product_name": "Malloc disk", 00:11:31.204 "block_size": 512, 00:11:31.204 "num_blocks": 65536, 00:11:31.204 "uuid": "cb19723f-188b-4dc6-86d5-dbc308edaa52", 00:11:31.204 "assigned_rate_limits": { 00:11:31.204 "rw_ios_per_sec": 0, 00:11:31.204 "rw_mbytes_per_sec": 0, 00:11:31.204 "r_mbytes_per_sec": 0, 00:11:31.204 "w_mbytes_per_sec": 0 00:11:31.204 }, 00:11:31.204 "claimed": true, 00:11:31.205 "claim_type": "exclusive_write", 00:11:31.205 "zoned": false, 00:11:31.205 "supported_io_types": { 00:11:31.205 "read": true, 00:11:31.205 "write": true, 00:11:31.205 "unmap": true, 00:11:31.205 "flush": true, 00:11:31.205 "reset": true, 00:11:31.205 "nvme_admin": false, 00:11:31.205 "nvme_io": false, 00:11:31.205 "nvme_io_md": false, 00:11:31.205 "write_zeroes": true, 00:11:31.205 "zcopy": true, 00:11:31.205 "get_zone_info": false, 00:11:31.205 "zone_management": false, 00:11:31.205 "zone_append": false, 00:11:31.205 "compare": false, 00:11:31.205 "compare_and_write": false, 00:11:31.205 "abort": true, 00:11:31.205 "seek_hole": false, 00:11:31.205 "seek_data": false, 00:11:31.205 "copy": true, 00:11:31.205 "nvme_iov_md": false 00:11:31.205 }, 00:11:31.205 "memory_domains": [ 00:11:31.205 { 00:11:31.205 "dma_device_id": "system", 00:11:31.205 "dma_device_type": 1 00:11:31.205 }, 00:11:31.205 { 00:11:31.205 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:31.205 "dma_device_type": 2 00:11:31.205 } 00:11:31.205 ], 00:11:31.205 "driver_specific": {} 00:11:31.205 } 00:11:31.205 ] 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:31.205 
14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.205 "name": "Existed_Raid", 00:11:31.205 "uuid": "1bdd946f-dea1-4c40-80f4-343499ff878a", 00:11:31.205 "strip_size_kb": 0, 
00:11:31.205 "state": "configuring", 00:11:31.205 "raid_level": "raid1", 00:11:31.205 "superblock": true, 00:11:31.205 "num_base_bdevs": 3, 00:11:31.205 "num_base_bdevs_discovered": 2, 00:11:31.205 "num_base_bdevs_operational": 3, 00:11:31.205 "base_bdevs_list": [ 00:11:31.205 { 00:11:31.205 "name": "BaseBdev1", 00:11:31.205 "uuid": "cb19723f-188b-4dc6-86d5-dbc308edaa52", 00:11:31.205 "is_configured": true, 00:11:31.205 "data_offset": 2048, 00:11:31.205 "data_size": 63488 00:11:31.205 }, 00:11:31.205 { 00:11:31.205 "name": null, 00:11:31.205 "uuid": "86a111ff-7f70-4104-906a-58577dc81ebf", 00:11:31.205 "is_configured": false, 00:11:31.205 "data_offset": 0, 00:11:31.205 "data_size": 63488 00:11:31.205 }, 00:11:31.205 { 00:11:31.205 "name": "BaseBdev3", 00:11:31.205 "uuid": "3804e576-8666-4745-80cd-b88278756b0e", 00:11:31.205 "is_configured": true, 00:11:31.205 "data_offset": 2048, 00:11:31.205 "data_size": 63488 00:11:31.205 } 00:11:31.205 ] 00:11:31.205 }' 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.205 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev 
BaseBdev3 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.819 [2024-11-27 14:11:08.887437] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:31.819 14:11:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.819 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:31.819 "name": "Existed_Raid", 00:11:31.819 "uuid": "1bdd946f-dea1-4c40-80f4-343499ff878a", 00:11:31.819 "strip_size_kb": 0, 00:11:31.819 "state": "configuring", 00:11:31.819 "raid_level": "raid1", 00:11:31.819 "superblock": true, 00:11:31.819 "num_base_bdevs": 3, 00:11:31.819 "num_base_bdevs_discovered": 1, 00:11:31.819 "num_base_bdevs_operational": 3, 00:11:31.819 "base_bdevs_list": [ 00:11:31.819 { 00:11:31.819 "name": "BaseBdev1", 00:11:31.819 "uuid": "cb19723f-188b-4dc6-86d5-dbc308edaa52", 00:11:31.819 "is_configured": true, 00:11:31.819 "data_offset": 2048, 00:11:31.819 "data_size": 63488 00:11:31.819 }, 00:11:31.819 { 00:11:31.819 "name": null, 00:11:31.820 "uuid": "86a111ff-7f70-4104-906a-58577dc81ebf", 00:11:31.820 "is_configured": false, 00:11:31.820 "data_offset": 0, 00:11:31.820 "data_size": 63488 00:11:31.820 }, 00:11:31.820 { 00:11:31.820 "name": null, 00:11:31.820 "uuid": "3804e576-8666-4745-80cd-b88278756b0e", 00:11:31.820 "is_configured": false, 00:11:31.820 "data_offset": 0, 00:11:31.820 "data_size": 63488 00:11:31.820 } 00:11:31.820 ] 00:11:31.820 }' 00:11:31.820 14:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:31.820 14:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.386 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.386 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:32.386 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.387 [2024-11-27 14:11:09.451725] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.387 "name": "Existed_Raid", 00:11:32.387 "uuid": "1bdd946f-dea1-4c40-80f4-343499ff878a", 00:11:32.387 "strip_size_kb": 0, 00:11:32.387 "state": "configuring", 00:11:32.387 "raid_level": "raid1", 00:11:32.387 "superblock": true, 00:11:32.387 "num_base_bdevs": 3, 00:11:32.387 "num_base_bdevs_discovered": 2, 00:11:32.387 "num_base_bdevs_operational": 3, 00:11:32.387 "base_bdevs_list": [ 00:11:32.387 { 00:11:32.387 "name": "BaseBdev1", 00:11:32.387 "uuid": "cb19723f-188b-4dc6-86d5-dbc308edaa52", 00:11:32.387 "is_configured": true, 00:11:32.387 "data_offset": 2048, 00:11:32.387 "data_size": 63488 00:11:32.387 }, 00:11:32.387 { 00:11:32.387 "name": null, 00:11:32.387 "uuid": "86a111ff-7f70-4104-906a-58577dc81ebf", 00:11:32.387 "is_configured": false, 00:11:32.387 "data_offset": 0, 00:11:32.387 "data_size": 63488 00:11:32.387 }, 00:11:32.387 { 00:11:32.387 "name": "BaseBdev3", 00:11:32.387 "uuid": "3804e576-8666-4745-80cd-b88278756b0e", 00:11:32.387 "is_configured": true, 00:11:32.387 "data_offset": 2048, 00:11:32.387 "data_size": 63488 00:11:32.387 } 00:11:32.387 ] 00:11:32.387 }' 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.387 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.954 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.954 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.954 14:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.954 14:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.954 [2024-11-27 14:11:10.044002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- 
# local strip_size=0 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:32.954 "name": "Existed_Raid", 00:11:32.954 "uuid": "1bdd946f-dea1-4c40-80f4-343499ff878a", 00:11:32.954 "strip_size_kb": 0, 00:11:32.954 "state": "configuring", 00:11:32.954 "raid_level": "raid1", 00:11:32.954 "superblock": true, 00:11:32.954 "num_base_bdevs": 3, 00:11:32.954 "num_base_bdevs_discovered": 1, 00:11:32.954 "num_base_bdevs_operational": 3, 00:11:32.954 "base_bdevs_list": [ 00:11:32.954 { 00:11:32.954 "name": null, 00:11:32.954 "uuid": "cb19723f-188b-4dc6-86d5-dbc308edaa52", 00:11:32.954 "is_configured": false, 00:11:32.954 "data_offset": 0, 00:11:32.954 "data_size": 63488 00:11:32.954 }, 00:11:32.954 { 00:11:32.954 "name": null, 00:11:32.954 "uuid": 
"86a111ff-7f70-4104-906a-58577dc81ebf", 00:11:32.954 "is_configured": false, 00:11:32.954 "data_offset": 0, 00:11:32.954 "data_size": 63488 00:11:32.954 }, 00:11:32.954 { 00:11:32.954 "name": "BaseBdev3", 00:11:32.954 "uuid": "3804e576-8666-4745-80cd-b88278756b0e", 00:11:32.954 "is_configured": true, 00:11:32.954 "data_offset": 2048, 00:11:32.954 "data_size": 63488 00:11:32.954 } 00:11:32.954 ] 00:11:32.954 }' 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:32.954 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.520 [2024-11-27 14:11:10.768331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:33.520 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:33.777 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:33.777 "name": "Existed_Raid", 00:11:33.777 "uuid": "1bdd946f-dea1-4c40-80f4-343499ff878a", 00:11:33.777 "strip_size_kb": 0, 00:11:33.777 "state": "configuring", 00:11:33.777 
"raid_level": "raid1", 00:11:33.777 "superblock": true, 00:11:33.777 "num_base_bdevs": 3, 00:11:33.777 "num_base_bdevs_discovered": 2, 00:11:33.777 "num_base_bdevs_operational": 3, 00:11:33.777 "base_bdevs_list": [ 00:11:33.777 { 00:11:33.777 "name": null, 00:11:33.777 "uuid": "cb19723f-188b-4dc6-86d5-dbc308edaa52", 00:11:33.777 "is_configured": false, 00:11:33.777 "data_offset": 0, 00:11:33.777 "data_size": 63488 00:11:33.777 }, 00:11:33.777 { 00:11:33.777 "name": "BaseBdev2", 00:11:33.777 "uuid": "86a111ff-7f70-4104-906a-58577dc81ebf", 00:11:33.777 "is_configured": true, 00:11:33.777 "data_offset": 2048, 00:11:33.777 "data_size": 63488 00:11:33.777 }, 00:11:33.777 { 00:11:33.777 "name": "BaseBdev3", 00:11:33.777 "uuid": "3804e576-8666-4745-80cd-b88278756b0e", 00:11:33.777 "is_configured": true, 00:11:33.777 "data_offset": 2048, 00:11:33.777 "data_size": 63488 00:11:33.777 } 00:11:33.777 ] 00:11:33.777 }' 00:11:33.777 14:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:33.777 14:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.034 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.034 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.034 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:34.034 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.293 14:11:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u cb19723f-188b-4dc6-86d5-dbc308edaa52 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.293 [2024-11-27 14:11:11.419096] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:34.293 [2024-11-27 14:11:11.419465] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:34.293 [2024-11-27 14:11:11.419482] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:34.293 NewBaseBdev 00:11:34.293 [2024-11-27 14:11:11.419832] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:11:34.293 [2024-11-27 14:11:11.420021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:34.293 [2024-11-27 14:11:11.420051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:11:34.293 [2024-11-27 14:11:11.420214] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:11:34.293 
14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.293 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.293 [ 00:11:34.293 { 00:11:34.293 "name": "NewBaseBdev", 00:11:34.293 "aliases": [ 00:11:34.293 "cb19723f-188b-4dc6-86d5-dbc308edaa52" 00:11:34.293 ], 00:11:34.293 "product_name": "Malloc disk", 00:11:34.293 "block_size": 512, 00:11:34.293 "num_blocks": 65536, 00:11:34.293 "uuid": "cb19723f-188b-4dc6-86d5-dbc308edaa52", 00:11:34.293 "assigned_rate_limits": { 00:11:34.293 "rw_ios_per_sec": 0, 00:11:34.293 "rw_mbytes_per_sec": 0, 00:11:34.293 "r_mbytes_per_sec": 0, 00:11:34.293 "w_mbytes_per_sec": 0 00:11:34.293 }, 00:11:34.293 "claimed": true, 00:11:34.293 "claim_type": "exclusive_write", 00:11:34.293 
"zoned": false, 00:11:34.293 "supported_io_types": { 00:11:34.293 "read": true, 00:11:34.294 "write": true, 00:11:34.294 "unmap": true, 00:11:34.294 "flush": true, 00:11:34.294 "reset": true, 00:11:34.294 "nvme_admin": false, 00:11:34.294 "nvme_io": false, 00:11:34.294 "nvme_io_md": false, 00:11:34.294 "write_zeroes": true, 00:11:34.294 "zcopy": true, 00:11:34.294 "get_zone_info": false, 00:11:34.294 "zone_management": false, 00:11:34.294 "zone_append": false, 00:11:34.294 "compare": false, 00:11:34.294 "compare_and_write": false, 00:11:34.294 "abort": true, 00:11:34.294 "seek_hole": false, 00:11:34.294 "seek_data": false, 00:11:34.294 "copy": true, 00:11:34.294 "nvme_iov_md": false 00:11:34.294 }, 00:11:34.294 "memory_domains": [ 00:11:34.294 { 00:11:34.294 "dma_device_id": "system", 00:11:34.294 "dma_device_type": 1 00:11:34.294 }, 00:11:34.294 { 00:11:34.294 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.294 "dma_device_type": 2 00:11:34.294 } 00:11:34.294 ], 00:11:34.294 "driver_specific": {} 00:11:34.294 } 00:11:34.294 ] 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 
00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:34.294 "name": "Existed_Raid", 00:11:34.294 "uuid": "1bdd946f-dea1-4c40-80f4-343499ff878a", 00:11:34.294 "strip_size_kb": 0, 00:11:34.294 "state": "online", 00:11:34.294 "raid_level": "raid1", 00:11:34.294 "superblock": true, 00:11:34.294 "num_base_bdevs": 3, 00:11:34.294 "num_base_bdevs_discovered": 3, 00:11:34.294 "num_base_bdevs_operational": 3, 00:11:34.294 "base_bdevs_list": [ 00:11:34.294 { 00:11:34.294 "name": "NewBaseBdev", 00:11:34.294 "uuid": "cb19723f-188b-4dc6-86d5-dbc308edaa52", 00:11:34.294 "is_configured": true, 00:11:34.294 "data_offset": 2048, 00:11:34.294 "data_size": 63488 00:11:34.294 }, 00:11:34.294 { 00:11:34.294 "name": "BaseBdev2", 00:11:34.294 "uuid": "86a111ff-7f70-4104-906a-58577dc81ebf", 00:11:34.294 "is_configured": true, 00:11:34.294 "data_offset": 2048, 00:11:34.294 "data_size": 63488 00:11:34.294 }, 00:11:34.294 
{ 00:11:34.294 "name": "BaseBdev3", 00:11:34.294 "uuid": "3804e576-8666-4745-80cd-b88278756b0e", 00:11:34.294 "is_configured": true, 00:11:34.294 "data_offset": 2048, 00:11:34.294 "data_size": 63488 00:11:34.294 } 00:11:34.294 ] 00:11:34.294 }' 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:34.294 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:34.862 [2024-11-27 14:11:11.919798] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.862 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:34.862 "name": "Existed_Raid", 00:11:34.862 
"aliases": [ 00:11:34.862 "1bdd946f-dea1-4c40-80f4-343499ff878a" 00:11:34.862 ], 00:11:34.862 "product_name": "Raid Volume", 00:11:34.862 "block_size": 512, 00:11:34.862 "num_blocks": 63488, 00:11:34.862 "uuid": "1bdd946f-dea1-4c40-80f4-343499ff878a", 00:11:34.862 "assigned_rate_limits": { 00:11:34.862 "rw_ios_per_sec": 0, 00:11:34.862 "rw_mbytes_per_sec": 0, 00:11:34.862 "r_mbytes_per_sec": 0, 00:11:34.862 "w_mbytes_per_sec": 0 00:11:34.862 }, 00:11:34.862 "claimed": false, 00:11:34.862 "zoned": false, 00:11:34.862 "supported_io_types": { 00:11:34.862 "read": true, 00:11:34.862 "write": true, 00:11:34.862 "unmap": false, 00:11:34.862 "flush": false, 00:11:34.862 "reset": true, 00:11:34.862 "nvme_admin": false, 00:11:34.862 "nvme_io": false, 00:11:34.862 "nvme_io_md": false, 00:11:34.862 "write_zeroes": true, 00:11:34.862 "zcopy": false, 00:11:34.862 "get_zone_info": false, 00:11:34.862 "zone_management": false, 00:11:34.862 "zone_append": false, 00:11:34.862 "compare": false, 00:11:34.862 "compare_and_write": false, 00:11:34.862 "abort": false, 00:11:34.862 "seek_hole": false, 00:11:34.862 "seek_data": false, 00:11:34.862 "copy": false, 00:11:34.862 "nvme_iov_md": false 00:11:34.862 }, 00:11:34.862 "memory_domains": [ 00:11:34.862 { 00:11:34.862 "dma_device_id": "system", 00:11:34.862 "dma_device_type": 1 00:11:34.862 }, 00:11:34.862 { 00:11:34.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.862 "dma_device_type": 2 00:11:34.862 }, 00:11:34.862 { 00:11:34.862 "dma_device_id": "system", 00:11:34.862 "dma_device_type": 1 00:11:34.862 }, 00:11:34.862 { 00:11:34.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.862 "dma_device_type": 2 00:11:34.862 }, 00:11:34.862 { 00:11:34.862 "dma_device_id": "system", 00:11:34.862 "dma_device_type": 1 00:11:34.862 }, 00:11:34.862 { 00:11:34.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:34.862 "dma_device_type": 2 00:11:34.862 } 00:11:34.862 ], 00:11:34.862 "driver_specific": { 00:11:34.862 "raid": { 00:11:34.862 
"uuid": "1bdd946f-dea1-4c40-80f4-343499ff878a", 00:11:34.862 "strip_size_kb": 0, 00:11:34.862 "state": "online", 00:11:34.862 "raid_level": "raid1", 00:11:34.862 "superblock": true, 00:11:34.862 "num_base_bdevs": 3, 00:11:34.862 "num_base_bdevs_discovered": 3, 00:11:34.862 "num_base_bdevs_operational": 3, 00:11:34.862 "base_bdevs_list": [ 00:11:34.862 { 00:11:34.862 "name": "NewBaseBdev", 00:11:34.862 "uuid": "cb19723f-188b-4dc6-86d5-dbc308edaa52", 00:11:34.862 "is_configured": true, 00:11:34.862 "data_offset": 2048, 00:11:34.862 "data_size": 63488 00:11:34.862 }, 00:11:34.862 { 00:11:34.862 "name": "BaseBdev2", 00:11:34.862 "uuid": "86a111ff-7f70-4104-906a-58577dc81ebf", 00:11:34.862 "is_configured": true, 00:11:34.862 "data_offset": 2048, 00:11:34.863 "data_size": 63488 00:11:34.863 }, 00:11:34.863 { 00:11:34.863 "name": "BaseBdev3", 00:11:34.863 "uuid": "3804e576-8666-4745-80cd-b88278756b0e", 00:11:34.863 "is_configured": true, 00:11:34.863 "data_offset": 2048, 00:11:34.863 "data_size": 63488 00:11:34.863 } 00:11:34.863 ] 00:11:34.863 } 00:11:34.863 } 00:11:34.863 }' 00:11:34.863 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:34.863 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:11:34.863 BaseBdev2 00:11:34.863 BaseBdev3' 00:11:34.863 14:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:11:34.863 14:11:12 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:34.863 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:35.122 14:11:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:35.122 [2024-11-27 14:11:12.191532] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:35.122 [2024-11-27 14:11:12.191576] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:35.122 [2024-11-27 14:11:12.191683] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:35.122 [2024-11-27 14:11:12.192080] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:35.122 [2024-11-27 14:11:12.192100] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 67980 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # 
'[' -z 67980 ']' 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 67980 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67980 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:35.122 killing process with pid 67980 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67980' 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 67980 00:11:35.122 [2024-11-27 14:11:12.229902] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:35.122 14:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 67980 00:11:35.381 [2024-11-27 14:11:12.496677] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:36.319 14:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:11:36.319 00:11:36.319 real 0m11.873s 00:11:36.319 user 0m19.753s 00:11:36.319 sys 0m1.604s 00:11:36.319 14:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:36.319 14:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:36.319 ************************************ 00:11:36.319 END TEST raid_state_function_test_sb 00:11:36.319 ************************************ 00:11:36.319 14:11:13 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test 
raid_superblock_test raid1 3 00:11:36.319 14:11:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:36.319 14:11:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:36.319 14:11:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:36.319 ************************************ 00:11:36.319 START TEST raid_superblock_test 00:11:36.319 ************************************ 00:11:36.319 14:11:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 3 00:11:36.319 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:11:36.319 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' 
raid1 '!=' raid1 ']' 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=68617 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 68617 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 68617 ']' 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:36.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:36.320 14:11:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.579 [2024-11-27 14:11:13.690429] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:11:36.579 [2024-11-27 14:11:13.690627] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid68617 ] 00:11:36.838 [2024-11-27 14:11:13.877949] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:36.838 [2024-11-27 14:11:14.032666] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:37.099 [2024-11-27 14:11:14.240883] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.099 [2024-11-27 14:11:14.240958] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:11:37.666 
14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.666 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.666 malloc1 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.667 [2024-11-27 14:11:14.752564] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:37.667 [2024-11-27 14:11:14.752643] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.667 [2024-11-27 14:11:14.752682] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:11:37.667 [2024-11-27 14:11:14.752696] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.667 [2024-11-27 14:11:14.755623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.667 [2024-11-27 14:11:14.755838] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:37.667 pt1 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.667 malloc2 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.667 [2024-11-27 14:11:14.809996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:37.667 [2024-11-27 14:11:14.810069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.667 [2024-11-27 14:11:14.810114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:11:37.667 [2024-11-27 14:11:14.810130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.667 [2024-11-27 14:11:14.813145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.667 [2024-11-27 14:11:14.813368] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:37.667 
pt2 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.667 malloc3 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.667 [2024-11-27 14:11:14.878731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:37.667 [2024-11-27 14:11:14.878939] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:37.667 [2024-11-27 14:11:14.879019] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:11:37.667 [2024-11-27 14:11:14.879137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:37.667 [2024-11-27 14:11:14.882030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:37.667 [2024-11-27 14:11:14.882220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:37.667 pt3 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.667 [2024-11-27 14:11:14.891012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:37.667 [2024-11-27 14:11:14.893524] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:37.667 [2024-11-27 14:11:14.893663] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:37.667 [2024-11-27 14:11:14.893925] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:11:37.667 [2024-11-27 14:11:14.893954] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:37.667 [2024-11-27 14:11:14.894263] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:11:37.667 
[2024-11-27 14:11:14.894639] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:11:37.667 [2024-11-27 14:11:14.894667] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:11:37.667 [2024-11-27 14:11:14.894914] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:37.667 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:37.927 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:37.927 "name": "raid_bdev1", 00:11:37.927 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:37.927 "strip_size_kb": 0, 00:11:37.927 "state": "online", 00:11:37.927 "raid_level": "raid1", 00:11:37.927 "superblock": true, 00:11:37.927 "num_base_bdevs": 3, 00:11:37.927 "num_base_bdevs_discovered": 3, 00:11:37.927 "num_base_bdevs_operational": 3, 00:11:37.927 "base_bdevs_list": [ 00:11:37.927 { 00:11:37.927 "name": "pt1", 00:11:37.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:37.927 "is_configured": true, 00:11:37.927 "data_offset": 2048, 00:11:37.927 "data_size": 63488 00:11:37.927 }, 00:11:37.927 { 00:11:37.927 "name": "pt2", 00:11:37.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:37.927 "is_configured": true, 00:11:37.927 "data_offset": 2048, 00:11:37.927 "data_size": 63488 00:11:37.927 }, 00:11:37.927 { 00:11:37.927 "name": "pt3", 00:11:37.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:37.927 "is_configured": true, 00:11:37.927 "data_offset": 2048, 00:11:37.927 "data_size": 63488 00:11:37.927 } 00:11:37.927 ] 00:11:37.927 }' 00:11:37.927 14:11:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:37.927 14:11:14 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.186 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:11:38.186 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:38.186 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:38.186 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:38.186 14:11:15 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:38.186 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:38.186 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:38.186 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.186 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.186 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.186 [2024-11-27 14:11:15.435561] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.186 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.445 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:38.445 "name": "raid_bdev1", 00:11:38.445 "aliases": [ 00:11:38.445 "81e3722c-a460-47de-99ed-8634c7025d9e" 00:11:38.445 ], 00:11:38.445 "product_name": "Raid Volume", 00:11:38.445 "block_size": 512, 00:11:38.445 "num_blocks": 63488, 00:11:38.445 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:38.445 "assigned_rate_limits": { 00:11:38.445 "rw_ios_per_sec": 0, 00:11:38.445 "rw_mbytes_per_sec": 0, 00:11:38.445 "r_mbytes_per_sec": 0, 00:11:38.445 "w_mbytes_per_sec": 0 00:11:38.445 }, 00:11:38.445 "claimed": false, 00:11:38.445 "zoned": false, 00:11:38.445 "supported_io_types": { 00:11:38.445 "read": true, 00:11:38.445 "write": true, 00:11:38.445 "unmap": false, 00:11:38.445 "flush": false, 00:11:38.445 "reset": true, 00:11:38.445 "nvme_admin": false, 00:11:38.445 "nvme_io": false, 00:11:38.445 "nvme_io_md": false, 00:11:38.445 "write_zeroes": true, 00:11:38.445 "zcopy": false, 00:11:38.445 "get_zone_info": false, 00:11:38.445 "zone_management": false, 00:11:38.445 "zone_append": false, 00:11:38.445 "compare": false, 00:11:38.445 
"compare_and_write": false, 00:11:38.445 "abort": false, 00:11:38.445 "seek_hole": false, 00:11:38.445 "seek_data": false, 00:11:38.445 "copy": false, 00:11:38.445 "nvme_iov_md": false 00:11:38.445 }, 00:11:38.445 "memory_domains": [ 00:11:38.445 { 00:11:38.445 "dma_device_id": "system", 00:11:38.445 "dma_device_type": 1 00:11:38.445 }, 00:11:38.445 { 00:11:38.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.445 "dma_device_type": 2 00:11:38.445 }, 00:11:38.445 { 00:11:38.445 "dma_device_id": "system", 00:11:38.445 "dma_device_type": 1 00:11:38.445 }, 00:11:38.445 { 00:11:38.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.445 "dma_device_type": 2 00:11:38.445 }, 00:11:38.445 { 00:11:38.445 "dma_device_id": "system", 00:11:38.445 "dma_device_type": 1 00:11:38.445 }, 00:11:38.445 { 00:11:38.445 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.445 "dma_device_type": 2 00:11:38.445 } 00:11:38.446 ], 00:11:38.446 "driver_specific": { 00:11:38.446 "raid": { 00:11:38.446 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:38.446 "strip_size_kb": 0, 00:11:38.446 "state": "online", 00:11:38.446 "raid_level": "raid1", 00:11:38.446 "superblock": true, 00:11:38.446 "num_base_bdevs": 3, 00:11:38.446 "num_base_bdevs_discovered": 3, 00:11:38.446 "num_base_bdevs_operational": 3, 00:11:38.446 "base_bdevs_list": [ 00:11:38.446 { 00:11:38.446 "name": "pt1", 00:11:38.446 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.446 "is_configured": true, 00:11:38.446 "data_offset": 2048, 00:11:38.446 "data_size": 63488 00:11:38.446 }, 00:11:38.446 { 00:11:38.446 "name": "pt2", 00:11:38.446 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.446 "is_configured": true, 00:11:38.446 "data_offset": 2048, 00:11:38.446 "data_size": 63488 00:11:38.446 }, 00:11:38.446 { 00:11:38.446 "name": "pt3", 00:11:38.446 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.446 "is_configured": true, 00:11:38.446 "data_offset": 2048, 00:11:38.446 "data_size": 63488 00:11:38.446 } 
00:11:38.446 ] 00:11:38.446 } 00:11:38.446 } 00:11:38.446 }' 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:38.446 pt2 00:11:38.446 pt3' 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.446 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:11:38.706 [2024-11-27 14:11:15.763655] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=81e3722c-a460-47de-99ed-8634c7025d9e 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 81e3722c-a460-47de-99ed-8634c7025d9e ']' 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.706 [2024-11-27 14:11:15.815374] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.706 [2024-11-27 14:11:15.815588] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:38.706 [2024-11-27 14:11:15.815711] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:38.706 [2024-11-27 14:11:15.815828] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:38.706 [2024-11-27 14:11:15.815845] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 
00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.706 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.706 [2024-11-27 14:11:15.975484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:38.706 [2024-11-27 14:11:15.978141] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:38.706 [2024-11-27 14:11:15.978261] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:38.706 [2024-11-27 14:11:15.978331] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:38.706 [2024-11-27 14:11:15.978416] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:38.706 [2024-11-27 14:11:15.978463] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:38.706 [2024-11-27 14:11:15.978489] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:38.706 [2024-11-27 14:11:15.978502] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:11:38.706 request: 00:11:38.706 { 00:11:38.706 "name": "raid_bdev1", 00:11:38.965 "raid_level": "raid1", 00:11:38.965 "base_bdevs": [ 00:11:38.965 "malloc1", 00:11:38.965 "malloc2", 00:11:38.965 "malloc3" 00:11:38.965 ], 00:11:38.965 "superblock": false, 00:11:38.965 "method": "bdev_raid_create", 00:11:38.965 "req_id": 1 00:11:38.965 } 00:11:38.965 Got JSON-RPC error response 00:11:38.965 response: 00:11:38.965 { 00:11:38.965 "code": -17, 00:11:38.965 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:38.965 } 00:11:38.965 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:38.965 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:11:38.965 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:38.965 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:38.965 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:38.965 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:11:38.965 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.965 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.965 14:11:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:11:38.965 14:11:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.965 [2024-11-27 14:11:16.047529] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:38.965 [2024-11-27 14:11:16.047762] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:38.965 [2024-11-27 14:11:16.047821] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:11:38.965 [2024-11-27 14:11:16.047840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:38.965 [2024-11-27 14:11:16.050861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:38.965 [2024-11-27 14:11:16.050905] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:38.965 [2024-11-27 14:11:16.051016] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:38.965 [2024-11-27 14:11:16.051083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:38.965 pt1 00:11:38.965 
14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.965 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:38.965 "name": "raid_bdev1", 00:11:38.965 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:38.965 "strip_size_kb": 0, 00:11:38.966 
"state": "configuring", 00:11:38.966 "raid_level": "raid1", 00:11:38.966 "superblock": true, 00:11:38.966 "num_base_bdevs": 3, 00:11:38.966 "num_base_bdevs_discovered": 1, 00:11:38.966 "num_base_bdevs_operational": 3, 00:11:38.966 "base_bdevs_list": [ 00:11:38.966 { 00:11:38.966 "name": "pt1", 00:11:38.966 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:38.966 "is_configured": true, 00:11:38.966 "data_offset": 2048, 00:11:38.966 "data_size": 63488 00:11:38.966 }, 00:11:38.966 { 00:11:38.966 "name": null, 00:11:38.966 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:38.966 "is_configured": false, 00:11:38.966 "data_offset": 2048, 00:11:38.966 "data_size": 63488 00:11:38.966 }, 00:11:38.966 { 00:11:38.966 "name": null, 00:11:38.966 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:38.966 "is_configured": false, 00:11:38.966 "data_offset": 2048, 00:11:38.966 "data_size": 63488 00:11:38.966 } 00:11:38.966 ] 00:11:38.966 }' 00:11:38.966 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:38.966 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.534 [2024-11-27 14:11:16.591680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:39.534 [2024-11-27 14:11:16.591766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:39.534 [2024-11-27 14:11:16.591830] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:11:39.534 
[2024-11-27 14:11:16.591848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:39.534 [2024-11-27 14:11:16.592416] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:39.534 [2024-11-27 14:11:16.592443] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:39.534 [2024-11-27 14:11:16.592562] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:39.534 [2024-11-27 14:11:16.592609] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:39.534 pt2 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.534 [2024-11-27 14:11:16.599655] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:39.534 "name": "raid_bdev1", 00:11:39.534 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:39.534 "strip_size_kb": 0, 00:11:39.534 "state": "configuring", 00:11:39.534 "raid_level": "raid1", 00:11:39.534 "superblock": true, 00:11:39.534 "num_base_bdevs": 3, 00:11:39.534 "num_base_bdevs_discovered": 1, 00:11:39.534 "num_base_bdevs_operational": 3, 00:11:39.534 "base_bdevs_list": [ 00:11:39.534 { 00:11:39.534 "name": "pt1", 00:11:39.534 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:39.534 "is_configured": true, 00:11:39.534 "data_offset": 2048, 00:11:39.534 "data_size": 63488 00:11:39.534 }, 00:11:39.534 { 00:11:39.534 "name": null, 00:11:39.534 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:39.534 "is_configured": false, 00:11:39.534 "data_offset": 0, 00:11:39.534 "data_size": 63488 00:11:39.534 }, 00:11:39.534 { 00:11:39.534 "name": null, 00:11:39.534 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:39.534 "is_configured": false, 00:11:39.534 
"data_offset": 2048, 00:11:39.534 "data_size": 63488 00:11:39.534 } 00:11:39.534 ] 00:11:39.534 }' 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:39.534 14:11:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.103 [2024-11-27 14:11:17.147914] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:40.103 [2024-11-27 14:11:17.148003] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.103 [2024-11-27 14:11:17.148034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:11:40.103 [2024-11-27 14:11:17.148051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.103 [2024-11-27 14:11:17.148620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.103 [2024-11-27 14:11:17.148654] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:40.103 [2024-11-27 14:11:17.148758] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:40.103 [2024-11-27 14:11:17.148824] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:40.103 pt2 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.103 14:11:17 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.103 [2024-11-27 14:11:17.159868] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:40.103 [2024-11-27 14:11:17.159927] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:40.103 [2024-11-27 14:11:17.159949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:11:40.103 [2024-11-27 14:11:17.159964] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:40.103 [2024-11-27 14:11:17.160440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:40.103 [2024-11-27 14:11:17.160479] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:40.103 [2024-11-27 14:11:17.160559] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:40.103 [2024-11-27 14:11:17.160591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:40.103 [2024-11-27 14:11:17.160749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:40.103 [2024-11-27 14:11:17.160795] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:40.103 [2024-11-27 14:11:17.161096] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:40.103 [2024-11-27 14:11:17.161298] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid 
bdev generic 0x617000007e80 00:11:40.103 [2024-11-27 14:11:17.161320] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:11:40.103 [2024-11-27 14:11:17.161492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:40.103 pt3 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.103 14:11:17 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.103 "name": "raid_bdev1", 00:11:40.103 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:40.103 "strip_size_kb": 0, 00:11:40.103 "state": "online", 00:11:40.103 "raid_level": "raid1", 00:11:40.103 "superblock": true, 00:11:40.103 "num_base_bdevs": 3, 00:11:40.103 "num_base_bdevs_discovered": 3, 00:11:40.103 "num_base_bdevs_operational": 3, 00:11:40.103 "base_bdevs_list": [ 00:11:40.103 { 00:11:40.103 "name": "pt1", 00:11:40.103 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.103 "is_configured": true, 00:11:40.103 "data_offset": 2048, 00:11:40.103 "data_size": 63488 00:11:40.103 }, 00:11:40.103 { 00:11:40.103 "name": "pt2", 00:11:40.103 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.103 "is_configured": true, 00:11:40.103 "data_offset": 2048, 00:11:40.103 "data_size": 63488 00:11:40.103 }, 00:11:40.103 { 00:11:40.103 "name": "pt3", 00:11:40.103 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.103 "is_configured": true, 00:11:40.103 "data_offset": 2048, 00:11:40.103 "data_size": 63488 00:11:40.103 } 00:11:40.103 ] 00:11:40.103 }' 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.103 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local 
raid_bdev_info 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.671 [2024-11-27 14:11:17.688428] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:40.671 "name": "raid_bdev1", 00:11:40.671 "aliases": [ 00:11:40.671 "81e3722c-a460-47de-99ed-8634c7025d9e" 00:11:40.671 ], 00:11:40.671 "product_name": "Raid Volume", 00:11:40.671 "block_size": 512, 00:11:40.671 "num_blocks": 63488, 00:11:40.671 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:40.671 "assigned_rate_limits": { 00:11:40.671 "rw_ios_per_sec": 0, 00:11:40.671 "rw_mbytes_per_sec": 0, 00:11:40.671 "r_mbytes_per_sec": 0, 00:11:40.671 "w_mbytes_per_sec": 0 00:11:40.671 }, 00:11:40.671 "claimed": false, 00:11:40.671 "zoned": false, 00:11:40.671 "supported_io_types": { 00:11:40.671 "read": true, 00:11:40.671 "write": true, 00:11:40.671 "unmap": false, 00:11:40.671 "flush": false, 00:11:40.671 "reset": true, 00:11:40.671 "nvme_admin": false, 00:11:40.671 "nvme_io": false, 00:11:40.671 "nvme_io_md": false, 00:11:40.671 "write_zeroes": true, 00:11:40.671 "zcopy": false, 00:11:40.671 "get_zone_info": 
false, 00:11:40.671 "zone_management": false, 00:11:40.671 "zone_append": false, 00:11:40.671 "compare": false, 00:11:40.671 "compare_and_write": false, 00:11:40.671 "abort": false, 00:11:40.671 "seek_hole": false, 00:11:40.671 "seek_data": false, 00:11:40.671 "copy": false, 00:11:40.671 "nvme_iov_md": false 00:11:40.671 }, 00:11:40.671 "memory_domains": [ 00:11:40.671 { 00:11:40.671 "dma_device_id": "system", 00:11:40.671 "dma_device_type": 1 00:11:40.671 }, 00:11:40.671 { 00:11:40.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.671 "dma_device_type": 2 00:11:40.671 }, 00:11:40.671 { 00:11:40.671 "dma_device_id": "system", 00:11:40.671 "dma_device_type": 1 00:11:40.671 }, 00:11:40.671 { 00:11:40.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.671 "dma_device_type": 2 00:11:40.671 }, 00:11:40.671 { 00:11:40.671 "dma_device_id": "system", 00:11:40.671 "dma_device_type": 1 00:11:40.671 }, 00:11:40.671 { 00:11:40.671 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.671 "dma_device_type": 2 00:11:40.671 } 00:11:40.671 ], 00:11:40.671 "driver_specific": { 00:11:40.671 "raid": { 00:11:40.671 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:40.671 "strip_size_kb": 0, 00:11:40.671 "state": "online", 00:11:40.671 "raid_level": "raid1", 00:11:40.671 "superblock": true, 00:11:40.671 "num_base_bdevs": 3, 00:11:40.671 "num_base_bdevs_discovered": 3, 00:11:40.671 "num_base_bdevs_operational": 3, 00:11:40.671 "base_bdevs_list": [ 00:11:40.671 { 00:11:40.671 "name": "pt1", 00:11:40.671 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:40.671 "is_configured": true, 00:11:40.671 "data_offset": 2048, 00:11:40.671 "data_size": 63488 00:11:40.671 }, 00:11:40.671 { 00:11:40.671 "name": "pt2", 00:11:40.671 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.671 "is_configured": true, 00:11:40.671 "data_offset": 2048, 00:11:40.671 "data_size": 63488 00:11:40.671 }, 00:11:40.671 { 00:11:40.671 "name": "pt3", 00:11:40.671 "uuid": 
"00000000-0000-0000-0000-000000000003", 00:11:40.671 "is_configured": true, 00:11:40.671 "data_offset": 2048, 00:11:40.671 "data_size": 63488 00:11:40.671 } 00:11:40.671 ] 00:11:40.671 } 00:11:40.671 } 00:11:40.671 }' 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:11:40.671 pt2 00:11:40.671 pt3' 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.671 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.930 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.930 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:40.930 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:40.930 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:40.930 14:11:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:11:40.930 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.930 14:11:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.930 [2024-11-27 14:11:18.004478] 
bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.930 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.930 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 81e3722c-a460-47de-99ed-8634c7025d9e '!=' 81e3722c-a460-47de-99ed-8634c7025d9e ']' 00:11:40.930 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:11:40.930 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:40.930 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:40.930 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.931 [2024-11-27 14:11:18.052188] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:40.931 14:11:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:40.931 "name": "raid_bdev1", 00:11:40.931 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:40.931 "strip_size_kb": 0, 00:11:40.931 "state": "online", 00:11:40.931 "raid_level": "raid1", 00:11:40.931 "superblock": true, 00:11:40.931 "num_base_bdevs": 3, 00:11:40.931 "num_base_bdevs_discovered": 2, 00:11:40.931 "num_base_bdevs_operational": 2, 00:11:40.931 "base_bdevs_list": [ 00:11:40.931 { 00:11:40.931 "name": null, 00:11:40.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:40.931 "is_configured": false, 00:11:40.931 "data_offset": 0, 00:11:40.931 "data_size": 63488 00:11:40.931 }, 00:11:40.931 { 00:11:40.931 "name": "pt2", 00:11:40.931 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:40.931 "is_configured": true, 00:11:40.931 "data_offset": 2048, 00:11:40.931 "data_size": 63488 00:11:40.931 }, 00:11:40.931 { 00:11:40.931 "name": "pt3", 00:11:40.931 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:40.931 "is_configured": true, 00:11:40.931 "data_offset": 2048, 00:11:40.931 "data_size": 63488 00:11:40.931 } 
00:11:40.931 ] 00:11:40.931 }' 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:40.931 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.578 [2024-11-27 14:11:18.600356] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:41.578 [2024-11-27 14:11:18.600392] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:41.578 [2024-11-27 14:11:18.600489] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:41.578 [2024-11-27 14:11:18.600580] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:41.578 [2024-11-27 14:11:18.600617] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.578 14:11:18 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.578 [2024-11-27 14:11:18.688310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:41.578 [2024-11-27 14:11:18.688398] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:41.578 [2024-11-27 14:11:18.688423] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:11:41.578 [2024-11-27 14:11:18.688439] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:41.578 [2024-11-27 14:11:18.691425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:41.578 [2024-11-27 14:11:18.691640] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:41.578 [2024-11-27 14:11:18.691756] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:41.578 [2024-11-27 14:11:18.691848] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:41.578 pt2 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:41.578 14:11:18 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:41.578 "name": "raid_bdev1", 00:11:41.578 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:41.578 "strip_size_kb": 0, 00:11:41.578 "state": "configuring", 00:11:41.578 "raid_level": "raid1", 00:11:41.578 "superblock": true, 00:11:41.578 "num_base_bdevs": 3, 00:11:41.578 "num_base_bdevs_discovered": 1, 00:11:41.578 "num_base_bdevs_operational": 2, 00:11:41.578 "base_bdevs_list": [ 00:11:41.578 { 00:11:41.578 "name": null, 00:11:41.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.578 "is_configured": false, 00:11:41.578 "data_offset": 2048, 00:11:41.578 "data_size": 63488 00:11:41.578 }, 00:11:41.578 { 00:11:41.578 "name": "pt2", 00:11:41.578 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:41.578 "is_configured": true, 00:11:41.578 "data_offset": 2048, 00:11:41.578 "data_size": 63488 00:11:41.578 }, 00:11:41.578 { 00:11:41.578 "name": null, 00:11:41.578 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:41.578 "is_configured": false, 00:11:41.578 "data_offset": 2048, 00:11:41.578 "data_size": 63488 00:11:41.578 } 
00:11:41.578 ] 00:11:41.578 }' 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:41.578 14:11:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.154 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:11:42.154 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:11:42.154 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=2 00:11:42.154 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:42.154 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.154 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.154 [2024-11-27 14:11:19.240519] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:42.154 [2024-11-27 14:11:19.240600] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.154 [2024-11-27 14:11:19.240630] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:11:42.154 [2024-11-27 14:11:19.240653] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.154 [2024-11-27 14:11:19.241307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.154 [2024-11-27 14:11:19.241364] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:42.154 [2024-11-27 14:11:19.241511] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:42.154 [2024-11-27 14:11:19.241569] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:42.154 [2024-11-27 14:11:19.241791] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 
00:11:42.154 [2024-11-27 14:11:19.241820] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:42.154 [2024-11-27 14:11:19.242181] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:11:42.154 [2024-11-27 14:11:19.242392] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:42.155 [2024-11-27 14:11:19.242409] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:42.155 [2024-11-27 14:11:19.242608] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:42.155 pt3 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.155 
14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.155 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.155 "name": "raid_bdev1", 00:11:42.155 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:42.155 "strip_size_kb": 0, 00:11:42.156 "state": "online", 00:11:42.156 "raid_level": "raid1", 00:11:42.156 "superblock": true, 00:11:42.156 "num_base_bdevs": 3, 00:11:42.156 "num_base_bdevs_discovered": 2, 00:11:42.156 "num_base_bdevs_operational": 2, 00:11:42.156 "base_bdevs_list": [ 00:11:42.156 { 00:11:42.156 "name": null, 00:11:42.156 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.156 "is_configured": false, 00:11:42.156 "data_offset": 2048, 00:11:42.156 "data_size": 63488 00:11:42.156 }, 00:11:42.156 { 00:11:42.156 "name": "pt2", 00:11:42.156 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.156 "is_configured": true, 00:11:42.156 "data_offset": 2048, 00:11:42.156 "data_size": 63488 00:11:42.156 }, 00:11:42.156 { 00:11:42.156 "name": "pt3", 00:11:42.156 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.156 "is_configured": true, 00:11:42.156 "data_offset": 2048, 00:11:42.156 "data_size": 63488 00:11:42.156 } 00:11:42.156 ] 00:11:42.156 }' 00:11:42.156 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.156 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.726 [2024-11-27 14:11:19.772648] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.726 [2024-11-27 14:11:19.772693] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:42.726 [2024-11-27 14:11:19.772796] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:42.726 [2024-11-27 14:11:19.773125] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:42.726 [2024-11-27 14:11:19.773142] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.726 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.726 [2024-11-27 14:11:19.844706] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:42.726 [2024-11-27 14:11:19.844825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:42.726 [2024-11-27 14:11:19.844862] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:11:42.726 [2024-11-27 14:11:19.844877] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:42.726 [2024-11-27 14:11:19.848040] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:42.727 [2024-11-27 14:11:19.848096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:42.727 [2024-11-27 14:11:19.848215] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:42.727 [2024-11-27 14:11:19.848278] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:42.727 [2024-11-27 14:11:19.848442] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:11:42.727 [2024-11-27 14:11:19.848460] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:42.727 [2024-11-27 14:11:19.848482] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:11:42.727 [2024-11-27 14:11:19.848552] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:42.727 pt1 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:42.727 "name": "raid_bdev1", 00:11:42.727 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:42.727 "strip_size_kb": 0, 00:11:42.727 "state": "configuring", 00:11:42.727 "raid_level": "raid1", 00:11:42.727 "superblock": true, 00:11:42.727 "num_base_bdevs": 3, 00:11:42.727 "num_base_bdevs_discovered": 1, 00:11:42.727 "num_base_bdevs_operational": 2, 00:11:42.727 "base_bdevs_list": [ 00:11:42.727 { 00:11:42.727 "name": null, 00:11:42.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:42.727 "is_configured": false, 00:11:42.727 "data_offset": 2048, 00:11:42.727 "data_size": 63488 00:11:42.727 }, 00:11:42.727 { 00:11:42.727 "name": "pt2", 00:11:42.727 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:42.727 "is_configured": true, 00:11:42.727 "data_offset": 2048, 00:11:42.727 "data_size": 63488 00:11:42.727 }, 00:11:42.727 { 00:11:42.727 "name": null, 00:11:42.727 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:42.727 "is_configured": false, 00:11:42.727 "data_offset": 2048, 00:11:42.727 "data_size": 63488 00:11:42.727 } 00:11:42.727 ] 00:11:42.727 }' 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:42.727 14:11:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.295 [2024-11-27 14:11:20.420904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:43.295 [2024-11-27 14:11:20.420985] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:43.295 [2024-11-27 14:11:20.421018] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:11:43.295 [2024-11-27 14:11:20.421033] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:43.295 [2024-11-27 14:11:20.421618] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:43.295 [2024-11-27 14:11:20.421658] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:43.295 [2024-11-27 14:11:20.421793] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:43.295 [2024-11-27 14:11:20.421826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:43.295 [2024-11-27 14:11:20.421987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:11:43.295 [2024-11-27 14:11:20.422006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:43.295 [2024-11-27 14:11:20.422508] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:11:43.295 [2024-11-27 14:11:20.422760] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:11:43.295 [2024-11-27 14:11:20.422807] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:11:43.295 [2024-11-27 14:11:20.423017] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:43.295 pt3 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:11:43.295 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:43.295 "name": "raid_bdev1", 00:11:43.295 "uuid": "81e3722c-a460-47de-99ed-8634c7025d9e", 00:11:43.295 "strip_size_kb": 0, 00:11:43.295 "state": "online", 00:11:43.295 "raid_level": "raid1", 00:11:43.295 "superblock": true, 00:11:43.295 "num_base_bdevs": 3, 00:11:43.295 "num_base_bdevs_discovered": 2, 00:11:43.295 "num_base_bdevs_operational": 2, 00:11:43.295 "base_bdevs_list": [ 00:11:43.295 { 00:11:43.295 "name": null, 00:11:43.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:43.295 "is_configured": false, 00:11:43.295 "data_offset": 2048, 00:11:43.295 "data_size": 63488 00:11:43.295 }, 00:11:43.295 { 00:11:43.296 "name": "pt2", 00:11:43.296 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:43.296 "is_configured": true, 00:11:43.296 "data_offset": 2048, 00:11:43.296 "data_size": 63488 00:11:43.296 }, 00:11:43.296 { 00:11:43.296 "name": "pt3", 00:11:43.296 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:43.296 "is_configured": true, 00:11:43.296 "data_offset": 2048, 00:11:43.296 "data_size": 63488 00:11:43.296 } 00:11:43.296 ] 00:11:43.296 }' 00:11:43.296 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:43.296 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.864 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:11:43.864 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:11:43.864 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.864 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.864 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.864 14:11:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:11:43.864 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:11:43.864 14:11:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:11:43.864 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:43.864 14:11:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:43.864 [2024-11-27 14:11:21.001452] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 81e3722c-a460-47de-99ed-8634c7025d9e '!=' 81e3722c-a460-47de-99ed-8634c7025d9e ']' 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 68617 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 68617 ']' 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 68617 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 68617 00:11:43.864 killing process with pid 68617 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 68617' 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@973 -- # kill 68617 00:11:43.864 [2024-11-27 14:11:21.080725] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:43.864 [2024-11-27 14:11:21.080854] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.864 14:11:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 68617 00:11:43.864 [2024-11-27 14:11:21.080934] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.864 [2024-11-27 14:11:21.080952] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:11:44.123 [2024-11-27 14:11:21.348378] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:45.501 14:11:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:11:45.501 00:11:45.501 real 0m8.798s 00:11:45.501 user 0m14.427s 00:11:45.501 sys 0m1.248s 00:11:45.501 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:45.501 ************************************ 00:11:45.501 END TEST raid_superblock_test 00:11:45.501 ************************************ 00:11:45.501 14:11:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.501 14:11:22 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:11:45.501 14:11:22 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:45.501 14:11:22 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:45.501 14:11:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:45.501 ************************************ 00:11:45.501 START TEST raid_read_error_test 00:11:45.501 ************************************ 00:11:45.501 14:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 read 00:11:45.501 14:11:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:45.501 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:45.501 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:11:45.501 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:45.501 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.501 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:45.502 14:11:22 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.TzNJnxO0vv 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69068 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69068 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 69068 ']' 00:11:45.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:45.502 14:11:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.502 [2024-11-27 14:11:22.569192] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:11:45.502 [2024-11-27 14:11:22.569710] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69068 ] 00:11:45.502 [2024-11-27 14:11:22.757580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:45.761 [2024-11-27 14:11:22.889859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:46.033 [2024-11-27 14:11:23.092452] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.033 [2024-11-27 14:11:23.092498] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.320 BaseBdev1_malloc 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.320 true 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.320 [2024-11-27 14:11:23.565083] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:46.320 [2024-11-27 14:11:23.565157] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.320 [2024-11-27 14:11:23.565189] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:46.320 [2024-11-27 14:11:23.565223] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.320 [2024-11-27 14:11:23.568249] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.320 [2024-11-27 14:11:23.568300] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:46.320 BaseBdev1 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.320 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.580 BaseBdev2_malloc 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.580 true 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.580 [2024-11-27 14:11:23.621774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:46.580 [2024-11-27 14:11:23.621858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.580 [2024-11-27 14:11:23.621887] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:46.580 [2024-11-27 14:11:23.621905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.580 [2024-11-27 14:11:23.624689] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.580 [2024-11-27 14:11:23.624743] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:46.580 BaseBdev2 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.580 BaseBdev3_malloc 00:11:46.580 14:11:23 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.580 true 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.580 [2024-11-27 14:11:23.687852] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:46.580 [2024-11-27 14:11:23.687924] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:46.580 [2024-11-27 14:11:23.687953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:46.580 [2024-11-27 14:11:23.687972] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:46.580 [2024-11-27 14:11:23.690911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:46.580 [2024-11-27 14:11:23.691105] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:46.580 BaseBdev3 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.580 [2024-11-27 14:11:23.700048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.580 [2024-11-27 14:11:23.702936] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:46.580 [2024-11-27 14:11:23.703188] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:46.580 [2024-11-27 14:11:23.703616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:46.580 [2024-11-27 14:11:23.703759] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:46.580 [2024-11-27 14:11:23.704141] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:46.580 [2024-11-27 14:11:23.704393] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:46.580 [2024-11-27 14:11:23.704414] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:46.580 [2024-11-27 14:11:23.704681] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:46.580 14:11:23 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:46.580 "name": "raid_bdev1", 00:11:46.580 "uuid": "badbd10d-3108-4f34-ae61-6a7bbc8bc3c5", 00:11:46.580 "strip_size_kb": 0, 00:11:46.580 "state": "online", 00:11:46.580 "raid_level": "raid1", 00:11:46.580 "superblock": true, 00:11:46.580 "num_base_bdevs": 3, 00:11:46.580 "num_base_bdevs_discovered": 3, 00:11:46.580 "num_base_bdevs_operational": 3, 00:11:46.580 "base_bdevs_list": [ 00:11:46.580 { 00:11:46.580 "name": "BaseBdev1", 00:11:46.580 "uuid": "797f09b4-3849-5f10-84ac-a87e4614045e", 00:11:46.580 "is_configured": true, 00:11:46.580 "data_offset": 2048, 00:11:46.580 "data_size": 63488 00:11:46.580 }, 00:11:46.580 { 00:11:46.580 "name": "BaseBdev2", 00:11:46.580 "uuid": "ce2aaf15-0316-5343-96bf-7db0ca5978a7", 00:11:46.580 "is_configured": true, 00:11:46.580 "data_offset": 2048, 00:11:46.580 "data_size": 63488 
00:11:46.580 }, 00:11:46.580 { 00:11:46.580 "name": "BaseBdev3", 00:11:46.580 "uuid": "a81759c3-d6ad-5183-a72f-bc785ea8c3ea", 00:11:46.580 "is_configured": true, 00:11:46.580 "data_offset": 2048, 00:11:46.580 "data_size": 63488 00:11:46.580 } 00:11:46.580 ] 00:11:46.580 }' 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:46.580 14:11:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:47.149 14:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:47.149 14:11:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:47.149 [2024-11-27 14:11:24.302142] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=3 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:48.087 
14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.087 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:48.087 "name": "raid_bdev1", 00:11:48.087 "uuid": "badbd10d-3108-4f34-ae61-6a7bbc8bc3c5", 00:11:48.087 "strip_size_kb": 0, 00:11:48.087 "state": "online", 00:11:48.087 "raid_level": "raid1", 00:11:48.087 "superblock": true, 00:11:48.087 "num_base_bdevs": 3, 00:11:48.087 "num_base_bdevs_discovered": 3, 00:11:48.087 "num_base_bdevs_operational": 3, 00:11:48.087 "base_bdevs_list": [ 00:11:48.088 { 00:11:48.088 "name": "BaseBdev1", 00:11:48.088 "uuid": "797f09b4-3849-5f10-84ac-a87e4614045e", 
00:11:48.088 "is_configured": true, 00:11:48.088 "data_offset": 2048, 00:11:48.088 "data_size": 63488 00:11:48.088 }, 00:11:48.088 { 00:11:48.088 "name": "BaseBdev2", 00:11:48.088 "uuid": "ce2aaf15-0316-5343-96bf-7db0ca5978a7", 00:11:48.088 "is_configured": true, 00:11:48.088 "data_offset": 2048, 00:11:48.088 "data_size": 63488 00:11:48.088 }, 00:11:48.088 { 00:11:48.088 "name": "BaseBdev3", 00:11:48.088 "uuid": "a81759c3-d6ad-5183-a72f-bc785ea8c3ea", 00:11:48.088 "is_configured": true, 00:11:48.088 "data_offset": 2048, 00:11:48.088 "data_size": 63488 00:11:48.088 } 00:11:48.088 ] 00:11:48.088 }' 00:11:48.088 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:48.088 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.656 [2024-11-27 14:11:25.723511] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:48.656 [2024-11-27 14:11:25.723679] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:48.656 [2024-11-27 14:11:25.727504] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:48.656 [2024-11-27 14:11:25.727755] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:48.656 [2024-11-27 14:11:25.728039] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:48.656 [2024-11-27 14:11:25.728202] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:48.656 { 00:11:48.656 "results": [ 00:11:48.656 { 00:11:48.656 "job": "raid_bdev1", 
00:11:48.656 "core_mask": "0x1", 00:11:48.656 "workload": "randrw", 00:11:48.656 "percentage": 50, 00:11:48.656 "status": "finished", 00:11:48.656 "queue_depth": 1, 00:11:48.656 "io_size": 131072, 00:11:48.656 "runtime": 1.419117, 00:11:48.656 "iops": 8913.993701717336, 00:11:48.656 "mibps": 1114.249212714667, 00:11:48.656 "io_failed": 0, 00:11:48.656 "io_timeout": 0, 00:11:48.656 "avg_latency_us": 107.69146388789076, 00:11:48.656 "min_latency_us": 39.56363636363636, 00:11:48.656 "max_latency_us": 2010.7636363636364 00:11:48.656 } 00:11:48.656 ], 00:11:48.656 "core_count": 1 00:11:48.656 } 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69068 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 69068 ']' 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 69068 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69068 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69068' 00:11:48.656 killing process with pid 69068 00:11:48.656 14:11:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 69068 00:11:48.656 [2024-11-27 14:11:25.767726] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:48.657 14:11:25 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 69068 00:11:48.916 [2024-11-27 14:11:25.985553] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:50.293 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.TzNJnxO0vv 00:11:50.293 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:50.293 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:50.293 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:50.293 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:50.293 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:50.293 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:50.293 14:11:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:11:50.293 00:11:50.293 real 0m4.723s 00:11:50.293 user 0m5.786s 00:11:50.293 sys 0m0.592s 00:11:50.293 14:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:50.293 ************************************ 00:11:50.293 END TEST raid_read_error_test 00:11:50.293 ************************************ 00:11:50.293 14:11:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.293 14:11:27 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:11:50.293 14:11:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:50.293 14:11:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:50.293 14:11:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:50.293 ************************************ 00:11:50.293 START TEST raid_write_error_test 00:11:50.294 ************************************ 00:11:50.294 14:11:27 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 3 write 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=3 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 
00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.oS4Wp1X7x5 00:11:50.294 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=69219 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 69219 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 69219 ']' 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
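At this point bdevperf has been launched with `-z` (wait for an external start signal) and the harness blocks on `waitforlisten 69219` until the RPC socket is usable. A minimal sketch of that polling step, assuming an illustrative socket path and retry budget; the real helper lives in `autotest_common.sh` and does considerably more:

```shell
# Hypothetical stand-in for the harness's waitforlisten: poll until the
# target pid has bound its UNIX-domain RPC socket, or give up. The socket
# path, retry count, and sleep interval here are illustrative assumptions.
waitforlisten_sketch() {
    pid=$1; sock=${2:-/var/tmp/spdk.sock}; retries=${3:-100}
    while [ "$retries" -gt 0 ]; do
        kill -0 "$pid" 2>/dev/null || return 1  # process exited before listening
        [ -S "$sock" ] && return 0              # socket is up, safe to issue rpc_cmd
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1  # timed out waiting for the socket
}
```

Only after a helper like this returns successfully does the script begin issuing the `bdev_malloc_create` / `bdev_error_create` / `bdev_passthru_create` RPCs that build up the base bdev stack.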
00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:50.294 14:11:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.294 [2024-11-27 14:11:27.318539] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:11:50.294 [2024-11-27 14:11:27.318738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69219 ] 00:11:50.294 [2024-11-27 14:11:27.494670] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:50.553 [2024-11-27 14:11:27.712104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:50.847 [2024-11-27 14:11:27.959198] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:50.847 [2024-11-27 14:11:27.959296] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.413 BaseBdev1_malloc 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create 
BaseBdev1_malloc 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.413 true 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.413 [2024-11-27 14:11:28.543265] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:51.413 [2024-11-27 14:11:28.543341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.413 [2024-11-27 14:11:28.543374] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:11:51.413 [2024-11-27 14:11:28.543392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.413 [2024-11-27 14:11:28.546184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.413 BaseBdev1 00:11:51.413 [2024-11-27 14:11:28.546397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:11:51.413 BaseBdev2_malloc 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.413 true 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.413 [2024-11-27 14:11:28.609658] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:51.413 [2024-11-27 14:11:28.609954] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.413 [2024-11-27 14:11:28.610024] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:11:51.413 [2024-11-27 14:11:28.610050] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.413 [2024-11-27 14:11:28.613891] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.413 [2024-11-27 14:11:28.613954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:51.413 BaseBdev2 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:11:51.413 14:11:28 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.413 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.414 BaseBdev3_malloc 00:11:51.414 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.414 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:11:51.414 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.414 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.414 true 00:11:51.414 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.414 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:51.414 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.414 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.414 [2024-11-27 14:11:28.685553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:51.414 [2024-11-27 14:11:28.685638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:51.414 [2024-11-27 14:11:28.685676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:11:51.414 [2024-11-27 14:11:28.685699] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:51.414 [2024-11-27 14:11:28.689052] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:51.414 [2024-11-27 14:11:28.689114] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:11:51.672 BaseBdev3 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 -s 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.672 [2024-11-27 14:11:28.693952] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:51.672 [2024-11-27 14:11:28.696943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.672 [2024-11-27 14:11:28.697213] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:51.672 [2024-11-27 14:11:28.697565] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:11:51.672 [2024-11-27 14:11:28.697589] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:11:51.672 [2024-11-27 14:11:28.698100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006560 00:11:51.672 [2024-11-27 14:11:28.698396] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:11:51.672 [2024-11-27 14:11:28.698430] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:11:51.672 [2024-11-27 14:11:28.698761] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # 
local raid_bdev_name=raid_bdev1 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:51.672 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:51.672 "name": "raid_bdev1", 00:11:51.672 "uuid": "a2e9f16e-f378-4025-a7fa-796c9364d748", 00:11:51.672 "strip_size_kb": 0, 00:11:51.672 "state": "online", 00:11:51.672 "raid_level": "raid1", 00:11:51.672 "superblock": true, 00:11:51.672 "num_base_bdevs": 3, 00:11:51.672 "num_base_bdevs_discovered": 3, 00:11:51.672 "num_base_bdevs_operational": 3, 00:11:51.672 "base_bdevs_list": [ 00:11:51.672 { 00:11:51.672 "name": "BaseBdev1", 00:11:51.672 
"uuid": "dc5e8910-8108-5d4f-81cb-1801687aabf5", 00:11:51.672 "is_configured": true, 00:11:51.672 "data_offset": 2048, 00:11:51.672 "data_size": 63488 00:11:51.672 }, 00:11:51.672 { 00:11:51.672 "name": "BaseBdev2", 00:11:51.672 "uuid": "b44f1d44-0f1f-59ee-9345-88b26273a7c4", 00:11:51.672 "is_configured": true, 00:11:51.672 "data_offset": 2048, 00:11:51.672 "data_size": 63488 00:11:51.672 }, 00:11:51.672 { 00:11:51.672 "name": "BaseBdev3", 00:11:51.672 "uuid": "e79261e1-35d6-57b7-853b-e258077a53f8", 00:11:51.672 "is_configured": true, 00:11:51.672 "data_offset": 2048, 00:11:51.672 "data_size": 63488 00:11:51.672 } 00:11:51.673 ] 00:11:51.673 }' 00:11:51.673 14:11:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:51.673 14:11:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.238 14:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:11:52.238 14:11:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:11:52.238 [2024-11-27 14:11:29.356733] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006700 00:11:53.171 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:53.171 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.171 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.171 [2024-11-27 14:11:30.227669] bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:11:53.171 [2024-11-27 14:11:30.227746] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:53.171 [2024-11-27 14:11:30.228049] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006700 
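The RPC above injects a write failure into EE_BaseBdev1_malloc, after which raid_bdev1 fails BaseBdev1 out of slot 0. The check that follows in bdev_raid.sh (lines @831-@835) then decides how many base bdevs should remain discovered: for raid1 a write error evicts the failing member, while a read error is satisfied from the mirror and leaves all members configured. A sketch of that decision, with an illustrative function name (not the exact bdev_raid.sh source):

```shell
# Hedged sketch of the expected-base-bdev logic: raid1 + injected WRITE
# error -> the failing base bdev is removed, so expect one fewer member;
# any other combination (e.g. raid1 + READ error) -> all members survive.
expected_num_base_bdevs() {
    raid_level=$1; num_base_bdevs=$2; error_io_type=$3
    if [ "$raid_level" = raid1 ] && [ "$error_io_type" = write ]; then
        echo $((num_base_bdevs - 1))
    else
        echo "$num_base_bdevs"
    fi
}

expected_num_base_bdevs raid1 3 write   # 2: BaseBdev1 is dropped from raid_bdev1
expected_num_base_bdevs raid1 3 read    # 3: the read is recovered from redundancy
```

This matches the transcript: the read-error run above verified `online raid1 0 3`, while this write-error run verifies `online raid1 0 2` with a null entry in slot 0.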
00:11:53.171 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.171 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:11:53.171 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:53.172 "name": "raid_bdev1", 00:11:53.172 "uuid": "a2e9f16e-f378-4025-a7fa-796c9364d748", 00:11:53.172 "strip_size_kb": 0, 00:11:53.172 "state": "online", 00:11:53.172 "raid_level": "raid1", 00:11:53.172 "superblock": true, 00:11:53.172 "num_base_bdevs": 3, 00:11:53.172 "num_base_bdevs_discovered": 2, 00:11:53.172 "num_base_bdevs_operational": 2, 00:11:53.172 "base_bdevs_list": [ 00:11:53.172 { 00:11:53.172 "name": null, 00:11:53.172 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:53.172 "is_configured": false, 00:11:53.172 "data_offset": 0, 00:11:53.172 "data_size": 63488 00:11:53.172 }, 00:11:53.172 { 00:11:53.172 "name": "BaseBdev2", 00:11:53.172 "uuid": "b44f1d44-0f1f-59ee-9345-88b26273a7c4", 00:11:53.172 "is_configured": true, 00:11:53.172 "data_offset": 2048, 00:11:53.172 "data_size": 63488 00:11:53.172 }, 00:11:53.172 { 00:11:53.172 "name": "BaseBdev3", 00:11:53.172 "uuid": "e79261e1-35d6-57b7-853b-e258077a53f8", 00:11:53.172 "is_configured": true, 00:11:53.172 "data_offset": 2048, 00:11:53.172 "data_size": 63488 00:11:53.172 } 00:11:53.172 ] 00:11:53.172 }' 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:53.172 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.430 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:11:53.430 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:53.431 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.431 [2024-11-27 14:11:30.690257] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:53.431 [2024-11-27 14:11:30.690519] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:53.431 [2024-11-27 14:11:30.694409] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:53.431 [2024-11-27 14:11:30.694720] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:53.431 [2024-11-27 14:11:30.694974] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:53.431 [2024-11-27 14:11:30.695226] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:11:53.431 { 00:11:53.431 "results": [ 00:11:53.431 { 00:11:53.431 "job": "raid_bdev1", 00:11:53.431 "core_mask": "0x1", 00:11:53.431 "workload": "randrw", 00:11:53.431 "percentage": 50, 00:11:53.431 "status": "finished", 00:11:53.431 "queue_depth": 1, 00:11:53.431 "io_size": 131072, 00:11:53.431 "runtime": 1.330599, 00:11:53.431 "iops": 9349.172816152724, 00:11:53.431 "mibps": 1168.6466020190906, 00:11:53.431 "io_failed": 0, 00:11:53.431 "io_timeout": 0, 00:11:53.431 "avg_latency_us": 101.99367202572347, 00:11:53.431 "min_latency_us": 39.09818181818182, 00:11:53.431 "max_latency_us": 2085.2363636363634 00:11:53.431 } 00:11:53.431 ], 00:11:53.431 "core_count": 1 00:11:53.431 } 00:11:53.431 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:53.431 14:11:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 69219 00:11:53.431 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 69219 ']' 00:11:53.431 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 69219 00:11:53.431 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:11:53.431 14:11:30
bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:53.690 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69219 00:11:53.690 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:53.690 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:53.690 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69219' 00:11:53.690 killing process with pid 69219 00:11:53.690 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 69219 00:11:53.690 [2024-11-27 14:11:30.728670] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:53.690 14:11:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 69219 00:11:53.690 [2024-11-27 14:11:30.944156] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.070 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.oS4Wp1X7x5 00:11:55.070 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:11:55.070 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:11:55.070 ************************************ 00:11:55.070 END TEST raid_write_error_test 00:11:55.070 ************************************ 00:11:55.070 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:11:55.070 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:11:55.070 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:11:55.070 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:11:55.070 14:11:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 
]] 00:11:55.070 00:11:55.070 real 0m4.826s 00:11:55.070 user 0m6.030s 00:11:55.070 sys 0m0.572s 00:11:55.070 14:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:55.070 14:11:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.070 14:11:32 bdev_raid -- bdev/bdev_raid.sh@966 -- # for n in {2..4} 00:11:55.070 14:11:32 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:11:55.070 14:11:32 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:11:55.070 14:11:32 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:11:55.070 14:11:32 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:55.070 14:11:32 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.070 ************************************ 00:11:55.070 START TEST raid_state_function_test 00:11:55.070 ************************************ 00:11:55.070 14:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 false 00:11:55.070 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:11:55.070 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:11:55.070 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:11:55.070 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.071 
14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:11:55.071 14:11:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:11:55.071 Process raid pid: 69363 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=69363 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 69363' 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 69363 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 69363 ']' 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:55.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:55.071 14:11:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.071 [2024-11-27 14:11:32.222434] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:11:55.071 [2024-11-27 14:11:32.222642] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:55.330 [2024-11-27 14:11:32.411111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:55.330 [2024-11-27 14:11:32.552259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:55.589 [2024-11-27 14:11:32.761934] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:55.589 [2024-11-27 14:11:32.762008] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.157 [2024-11-27 14:11:33.198393] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.157 [2024-11-27 14:11:33.198462] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.157 [2024-11-27 14:11:33.198479] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.157 [2024-11-27 14:11:33.198496] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.157 [2024-11-27 14:11:33.198506] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:11:56.157 [2024-11-27 14:11:33.198520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.157 [2024-11-27 14:11:33.198530] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:56.157 [2024-11-27 14:11:33.198544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.157 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.157 "name": "Existed_Raid", 00:11:56.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.157 "strip_size_kb": 64, 00:11:56.157 "state": "configuring", 00:11:56.157 "raid_level": "raid0", 00:11:56.157 "superblock": false, 00:11:56.157 "num_base_bdevs": 4, 00:11:56.157 "num_base_bdevs_discovered": 0, 00:11:56.157 "num_base_bdevs_operational": 4, 00:11:56.157 "base_bdevs_list": [ 00:11:56.157 { 00:11:56.157 "name": "BaseBdev1", 00:11:56.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.157 "is_configured": false, 00:11:56.157 "data_offset": 0, 00:11:56.157 "data_size": 0 00:11:56.157 }, 00:11:56.157 { 00:11:56.157 "name": "BaseBdev2", 00:11:56.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.157 "is_configured": false, 00:11:56.157 "data_offset": 0, 00:11:56.157 "data_size": 0 00:11:56.157 }, 00:11:56.157 { 00:11:56.158 "name": "BaseBdev3", 00:11:56.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.158 "is_configured": false, 00:11:56.158 "data_offset": 0, 00:11:56.158 "data_size": 0 00:11:56.158 }, 00:11:56.158 { 00:11:56.158 "name": "BaseBdev4", 00:11:56.158 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.158 "is_configured": false, 00:11:56.158 "data_offset": 0, 00:11:56.158 "data_size": 0 00:11:56.158 } 00:11:56.158 ] 00:11:56.158 }' 00:11:56.158 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.158 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.726 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:11:56.726 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.726 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.726 [2024-11-27 14:11:33.706446] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:56.727 [2024-11-27 14:11:33.706698] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.727 [2024-11-27 14:11:33.718513] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.727 [2024-11-27 14:11:33.718717] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.727 [2024-11-27 14:11:33.718861] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.727 [2024-11-27 14:11:33.719006] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.727 [2024-11-27 14:11:33.719125] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.727 [2024-11-27 14:11:33.719186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.727 [2024-11-27 14:11:33.719388] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:56.727 [2024-11-27 14:11:33.719541] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.727 [2024-11-27 14:11:33.766492] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:56.727 BaseBdev1 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.727 [ 00:11:56.727 { 00:11:56.727 "name": "BaseBdev1", 00:11:56.727 "aliases": [ 00:11:56.727 "fcd14749-5d2f-40df-a391-abe7814e0f11" 00:11:56.727 ], 00:11:56.727 "product_name": "Malloc disk", 00:11:56.727 "block_size": 512, 00:11:56.727 "num_blocks": 65536, 00:11:56.727 "uuid": "fcd14749-5d2f-40df-a391-abe7814e0f11", 00:11:56.727 "assigned_rate_limits": { 00:11:56.727 "rw_ios_per_sec": 0, 00:11:56.727 "rw_mbytes_per_sec": 0, 00:11:56.727 "r_mbytes_per_sec": 0, 00:11:56.727 "w_mbytes_per_sec": 0 00:11:56.727 }, 00:11:56.727 "claimed": true, 00:11:56.727 "claim_type": "exclusive_write", 00:11:56.727 "zoned": false, 00:11:56.727 "supported_io_types": { 00:11:56.727 "read": true, 00:11:56.727 "write": true, 00:11:56.727 "unmap": true, 00:11:56.727 "flush": true, 00:11:56.727 "reset": true, 00:11:56.727 "nvme_admin": false, 00:11:56.727 "nvme_io": false, 00:11:56.727 "nvme_io_md": false, 00:11:56.727 "write_zeroes": true, 00:11:56.727 "zcopy": true, 00:11:56.727 "get_zone_info": false, 00:11:56.727 "zone_management": false, 00:11:56.727 "zone_append": false, 00:11:56.727 "compare": false, 00:11:56.727 "compare_and_write": false, 00:11:56.727 "abort": true, 00:11:56.727 "seek_hole": false, 00:11:56.727 "seek_data": false, 00:11:56.727 "copy": true, 00:11:56.727 "nvme_iov_md": false 00:11:56.727 }, 00:11:56.727 "memory_domains": [ 00:11:56.727 { 00:11:56.727 "dma_device_id": "system", 00:11:56.727 "dma_device_type": 1 00:11:56.727 }, 00:11:56.727 { 00:11:56.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:56.727 "dma_device_type": 2 00:11:56.727 } 00:11:56.727 ], 00:11:56.727 "driver_specific": {} 00:11:56.727 } 00:11:56.727 ] 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:56.727 "name": "Existed_Raid", 
00:11:56.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.727 "strip_size_kb": 64, 00:11:56.727 "state": "configuring", 00:11:56.727 "raid_level": "raid0", 00:11:56.727 "superblock": false, 00:11:56.727 "num_base_bdevs": 4, 00:11:56.727 "num_base_bdevs_discovered": 1, 00:11:56.727 "num_base_bdevs_operational": 4, 00:11:56.727 "base_bdevs_list": [ 00:11:56.727 { 00:11:56.727 "name": "BaseBdev1", 00:11:56.727 "uuid": "fcd14749-5d2f-40df-a391-abe7814e0f11", 00:11:56.727 "is_configured": true, 00:11:56.727 "data_offset": 0, 00:11:56.727 "data_size": 65536 00:11:56.727 }, 00:11:56.727 { 00:11:56.727 "name": "BaseBdev2", 00:11:56.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.727 "is_configured": false, 00:11:56.727 "data_offset": 0, 00:11:56.727 "data_size": 0 00:11:56.727 }, 00:11:56.727 { 00:11:56.727 "name": "BaseBdev3", 00:11:56.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.727 "is_configured": false, 00:11:56.727 "data_offset": 0, 00:11:56.727 "data_size": 0 00:11:56.727 }, 00:11:56.727 { 00:11:56.727 "name": "BaseBdev4", 00:11:56.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:56.727 "is_configured": false, 00:11:56.727 "data_offset": 0, 00:11:56.727 "data_size": 0 00:11:56.727 } 00:11:56.727 ] 00:11:56.727 }' 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:56.727 14:11:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.295 [2024-11-27 14:11:34.286713] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:57.295 [2024-11-27 14:11:34.286949] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.295 [2024-11-27 14:11:34.294751] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:57.295 [2024-11-27 14:11:34.297333] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:57.295 [2024-11-27 14:11:34.297517] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:57.295 [2024-11-27 14:11:34.297641] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:57.295 [2024-11-27 14:11:34.297810] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:57.295 [2024-11-27 14:11:34.297933] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:11:57.295 [2024-11-27 14:11:34.298083] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 
00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.295 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.295 "name": "Existed_Raid", 00:11:57.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.295 "strip_size_kb": 64, 00:11:57.295 "state": "configuring", 00:11:57.295 "raid_level": "raid0", 00:11:57.295 "superblock": false, 00:11:57.295 "num_base_bdevs": 4, 00:11:57.295 
"num_base_bdevs_discovered": 1, 00:11:57.295 "num_base_bdevs_operational": 4, 00:11:57.295 "base_bdevs_list": [ 00:11:57.295 { 00:11:57.295 "name": "BaseBdev1", 00:11:57.295 "uuid": "fcd14749-5d2f-40df-a391-abe7814e0f11", 00:11:57.295 "is_configured": true, 00:11:57.295 "data_offset": 0, 00:11:57.295 "data_size": 65536 00:11:57.295 }, 00:11:57.295 { 00:11:57.295 "name": "BaseBdev2", 00:11:57.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.295 "is_configured": false, 00:11:57.295 "data_offset": 0, 00:11:57.295 "data_size": 0 00:11:57.295 }, 00:11:57.295 { 00:11:57.295 "name": "BaseBdev3", 00:11:57.295 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.295 "is_configured": false, 00:11:57.295 "data_offset": 0, 00:11:57.295 "data_size": 0 00:11:57.295 }, 00:11:57.295 { 00:11:57.296 "name": "BaseBdev4", 00:11:57.296 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.296 "is_configured": false, 00:11:57.296 "data_offset": 0, 00:11:57.296 "data_size": 0 00:11:57.296 } 00:11:57.296 ] 00:11:57.296 }' 00:11:57.296 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.296 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.554 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:11:57.554 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.554 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.813 [2024-11-27 14:11:34.853720] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:57.813 BaseBdev2 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:11:57.813 14:11:34 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.813 [ 00:11:57.813 { 00:11:57.813 "name": "BaseBdev2", 00:11:57.813 "aliases": [ 00:11:57.813 "bde9da15-7135-4335-89ee-9e80aba3e07b" 00:11:57.813 ], 00:11:57.813 "product_name": "Malloc disk", 00:11:57.813 "block_size": 512, 00:11:57.813 "num_blocks": 65536, 00:11:57.813 "uuid": "bde9da15-7135-4335-89ee-9e80aba3e07b", 00:11:57.813 "assigned_rate_limits": { 00:11:57.813 "rw_ios_per_sec": 0, 00:11:57.813 "rw_mbytes_per_sec": 0, 00:11:57.813 "r_mbytes_per_sec": 0, 00:11:57.813 "w_mbytes_per_sec": 0 00:11:57.813 }, 00:11:57.813 "claimed": true, 00:11:57.813 "claim_type": "exclusive_write", 00:11:57.813 "zoned": false, 00:11:57.813 "supported_io_types": { 
00:11:57.813 "read": true, 00:11:57.813 "write": true, 00:11:57.813 "unmap": true, 00:11:57.813 "flush": true, 00:11:57.813 "reset": true, 00:11:57.813 "nvme_admin": false, 00:11:57.813 "nvme_io": false, 00:11:57.813 "nvme_io_md": false, 00:11:57.813 "write_zeroes": true, 00:11:57.813 "zcopy": true, 00:11:57.813 "get_zone_info": false, 00:11:57.813 "zone_management": false, 00:11:57.813 "zone_append": false, 00:11:57.813 "compare": false, 00:11:57.813 "compare_and_write": false, 00:11:57.813 "abort": true, 00:11:57.813 "seek_hole": false, 00:11:57.813 "seek_data": false, 00:11:57.813 "copy": true, 00:11:57.813 "nvme_iov_md": false 00:11:57.813 }, 00:11:57.813 "memory_domains": [ 00:11:57.813 { 00:11:57.813 "dma_device_id": "system", 00:11:57.813 "dma_device_type": 1 00:11:57.813 }, 00:11:57.813 { 00:11:57.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:57.813 "dma_device_type": 2 00:11:57.813 } 00:11:57.813 ], 00:11:57.813 "driver_specific": {} 00:11:57.813 } 00:11:57.813 ] 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:57.813 "name": "Existed_Raid", 00:11:57.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.813 "strip_size_kb": 64, 00:11:57.813 "state": "configuring", 00:11:57.813 "raid_level": "raid0", 00:11:57.813 "superblock": false, 00:11:57.813 "num_base_bdevs": 4, 00:11:57.813 "num_base_bdevs_discovered": 2, 00:11:57.813 "num_base_bdevs_operational": 4, 00:11:57.813 "base_bdevs_list": [ 00:11:57.813 { 00:11:57.813 "name": "BaseBdev1", 00:11:57.813 "uuid": "fcd14749-5d2f-40df-a391-abe7814e0f11", 00:11:57.813 "is_configured": true, 00:11:57.813 "data_offset": 0, 00:11:57.813 "data_size": 65536 00:11:57.813 }, 00:11:57.813 { 00:11:57.813 "name": "BaseBdev2", 00:11:57.813 "uuid": "bde9da15-7135-4335-89ee-9e80aba3e07b", 00:11:57.813 
"is_configured": true, 00:11:57.813 "data_offset": 0, 00:11:57.813 "data_size": 65536 00:11:57.813 }, 00:11:57.813 { 00:11:57.813 "name": "BaseBdev3", 00:11:57.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.813 "is_configured": false, 00:11:57.813 "data_offset": 0, 00:11:57.813 "data_size": 0 00:11:57.813 }, 00:11:57.813 { 00:11:57.813 "name": "BaseBdev4", 00:11:57.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.813 "is_configured": false, 00:11:57.813 "data_offset": 0, 00:11:57.813 "data_size": 0 00:11:57.813 } 00:11:57.813 ] 00:11:57.813 }' 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:57.813 14:11:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.380 [2024-11-27 14:11:35.478364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:58.380 BaseBdev3 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.380 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.381 [ 00:11:58.381 { 00:11:58.381 "name": "BaseBdev3", 00:11:58.381 "aliases": [ 00:11:58.381 "95699e5b-a480-4e75-9fa0-ea77912cd2ee" 00:11:58.381 ], 00:11:58.381 "product_name": "Malloc disk", 00:11:58.381 "block_size": 512, 00:11:58.381 "num_blocks": 65536, 00:11:58.381 "uuid": "95699e5b-a480-4e75-9fa0-ea77912cd2ee", 00:11:58.381 "assigned_rate_limits": { 00:11:58.381 "rw_ios_per_sec": 0, 00:11:58.381 "rw_mbytes_per_sec": 0, 00:11:58.381 "r_mbytes_per_sec": 0, 00:11:58.381 "w_mbytes_per_sec": 0 00:11:58.381 }, 00:11:58.381 "claimed": true, 00:11:58.381 "claim_type": "exclusive_write", 00:11:58.381 "zoned": false, 00:11:58.381 "supported_io_types": { 00:11:58.381 "read": true, 00:11:58.381 "write": true, 00:11:58.381 "unmap": true, 00:11:58.381 "flush": true, 00:11:58.381 "reset": true, 00:11:58.381 "nvme_admin": false, 00:11:58.381 "nvme_io": false, 00:11:58.381 "nvme_io_md": false, 00:11:58.381 "write_zeroes": true, 00:11:58.381 "zcopy": true, 00:11:58.381 "get_zone_info": false, 00:11:58.381 "zone_management": false, 00:11:58.381 "zone_append": false, 00:11:58.381 "compare": false, 00:11:58.381 "compare_and_write": false, 
00:11:58.381 "abort": true, 00:11:58.381 "seek_hole": false, 00:11:58.381 "seek_data": false, 00:11:58.381 "copy": true, 00:11:58.381 "nvme_iov_md": false 00:11:58.381 }, 00:11:58.381 "memory_domains": [ 00:11:58.381 { 00:11:58.381 "dma_device_id": "system", 00:11:58.381 "dma_device_type": 1 00:11:58.381 }, 00:11:58.381 { 00:11:58.381 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.381 "dma_device_type": 2 00:11:58.381 } 00:11:58.381 ], 00:11:58.381 "driver_specific": {} 00:11:58.381 } 00:11:58.381 ] 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.381 "name": "Existed_Raid", 00:11:58.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.381 "strip_size_kb": 64, 00:11:58.381 "state": "configuring", 00:11:58.381 "raid_level": "raid0", 00:11:58.381 "superblock": false, 00:11:58.381 "num_base_bdevs": 4, 00:11:58.381 "num_base_bdevs_discovered": 3, 00:11:58.381 "num_base_bdevs_operational": 4, 00:11:58.381 "base_bdevs_list": [ 00:11:58.381 { 00:11:58.381 "name": "BaseBdev1", 00:11:58.381 "uuid": "fcd14749-5d2f-40df-a391-abe7814e0f11", 00:11:58.381 "is_configured": true, 00:11:58.381 "data_offset": 0, 00:11:58.381 "data_size": 65536 00:11:58.381 }, 00:11:58.381 { 00:11:58.381 "name": "BaseBdev2", 00:11:58.381 "uuid": "bde9da15-7135-4335-89ee-9e80aba3e07b", 00:11:58.381 "is_configured": true, 00:11:58.381 "data_offset": 0, 00:11:58.381 "data_size": 65536 00:11:58.381 }, 00:11:58.381 { 00:11:58.381 "name": "BaseBdev3", 00:11:58.381 "uuid": "95699e5b-a480-4e75-9fa0-ea77912cd2ee", 00:11:58.381 "is_configured": true, 00:11:58.381 "data_offset": 0, 00:11:58.381 "data_size": 65536 00:11:58.381 }, 00:11:58.381 { 00:11:58.381 "name": "BaseBdev4", 00:11:58.381 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.381 "is_configured": false, 
00:11:58.381 "data_offset": 0, 00:11:58.381 "data_size": 0 00:11:58.381 } 00:11:58.381 ] 00:11:58.381 }' 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.381 14:11:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.948 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:11:58.948 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.948 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.948 [2024-11-27 14:11:36.069158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:11:58.948 BaseBdev4 00:11:58.948 [2024-11-27 14:11:36.069398] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:11:58.948 [2024-11-27 14:11:36.069425] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:11:58.948 [2024-11-27 14:11:36.069811] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:11:58.948 [2024-11-27 14:11:36.070026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:11:58.948 [2024-11-27 14:11:36.070048] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:11:58.948 [2024-11-27 14:11:36.070367] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:58.948 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.948 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:11:58.948 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:11:58.948 14:11:36 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:11:58.948 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:11:58.948 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.949 [ 00:11:58.949 { 00:11:58.949 "name": "BaseBdev4", 00:11:58.949 "aliases": [ 00:11:58.949 "3b7850c6-e3c7-49ec-b5eb-88ad79ba7262" 00:11:58.949 ], 00:11:58.949 "product_name": "Malloc disk", 00:11:58.949 "block_size": 512, 00:11:58.949 "num_blocks": 65536, 00:11:58.949 "uuid": "3b7850c6-e3c7-49ec-b5eb-88ad79ba7262", 00:11:58.949 "assigned_rate_limits": { 00:11:58.949 "rw_ios_per_sec": 0, 00:11:58.949 "rw_mbytes_per_sec": 0, 00:11:58.949 "r_mbytes_per_sec": 0, 00:11:58.949 "w_mbytes_per_sec": 0 00:11:58.949 }, 00:11:58.949 "claimed": true, 00:11:58.949 "claim_type": "exclusive_write", 00:11:58.949 "zoned": false, 00:11:58.949 "supported_io_types": { 00:11:58.949 "read": true, 00:11:58.949 "write": true, 00:11:58.949 "unmap": true, 00:11:58.949 "flush": true, 00:11:58.949 "reset": true, 00:11:58.949 
"nvme_admin": false, 00:11:58.949 "nvme_io": false, 00:11:58.949 "nvme_io_md": false, 00:11:58.949 "write_zeroes": true, 00:11:58.949 "zcopy": true, 00:11:58.949 "get_zone_info": false, 00:11:58.949 "zone_management": false, 00:11:58.949 "zone_append": false, 00:11:58.949 "compare": false, 00:11:58.949 "compare_and_write": false, 00:11:58.949 "abort": true, 00:11:58.949 "seek_hole": false, 00:11:58.949 "seek_data": false, 00:11:58.949 "copy": true, 00:11:58.949 "nvme_iov_md": false 00:11:58.949 }, 00:11:58.949 "memory_domains": [ 00:11:58.949 { 00:11:58.949 "dma_device_id": "system", 00:11:58.949 "dma_device_type": 1 00:11:58.949 }, 00:11:58.949 { 00:11:58.949 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.949 "dma_device_type": 2 00:11:58.949 } 00:11:58.949 ], 00:11:58.949 "driver_specific": {} 00:11:58.949 } 00:11:58.949 ] 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:11:58.949 14:11:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:11:58.949 "name": "Existed_Raid", 00:11:58.949 "uuid": "7efbd1ef-aa47-4592-89cb-979a08d7c55e", 00:11:58.949 "strip_size_kb": 64, 00:11:58.949 "state": "online", 00:11:58.949 "raid_level": "raid0", 00:11:58.949 "superblock": false, 00:11:58.949 "num_base_bdevs": 4, 00:11:58.949 "num_base_bdevs_discovered": 4, 00:11:58.949 "num_base_bdevs_operational": 4, 00:11:58.949 "base_bdevs_list": [ 00:11:58.949 { 00:11:58.949 "name": "BaseBdev1", 00:11:58.949 "uuid": "fcd14749-5d2f-40df-a391-abe7814e0f11", 00:11:58.949 "is_configured": true, 00:11:58.949 "data_offset": 0, 00:11:58.949 "data_size": 65536 00:11:58.949 }, 00:11:58.949 { 00:11:58.949 "name": "BaseBdev2", 00:11:58.949 "uuid": "bde9da15-7135-4335-89ee-9e80aba3e07b", 00:11:58.949 "is_configured": true, 00:11:58.949 "data_offset": 0, 00:11:58.949 "data_size": 65536 00:11:58.949 }, 00:11:58.949 { 00:11:58.949 "name": "BaseBdev3", 00:11:58.949 "uuid": 
"95699e5b-a480-4e75-9fa0-ea77912cd2ee", 00:11:58.949 "is_configured": true, 00:11:58.949 "data_offset": 0, 00:11:58.949 "data_size": 65536 00:11:58.949 }, 00:11:58.949 { 00:11:58.949 "name": "BaseBdev4", 00:11:58.949 "uuid": "3b7850c6-e3c7-49ec-b5eb-88ad79ba7262", 00:11:58.949 "is_configured": true, 00:11:58.949 "data_offset": 0, 00:11:58.949 "data_size": 65536 00:11:58.949 } 00:11:58.949 ] 00:11:58.949 }' 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:11:58.949 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.517 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:11:59.517 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:11:59.517 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:11:59.517 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:11:59.517 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:11:59.517 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:11:59.517 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:11:59.517 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.517 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.517 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:11:59.517 [2024-11-27 14:11:36.613835] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:59.517 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.517 14:11:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:11:59.517 "name": "Existed_Raid", 00:11:59.517 "aliases": [ 00:11:59.517 "7efbd1ef-aa47-4592-89cb-979a08d7c55e" 00:11:59.517 ], 00:11:59.517 "product_name": "Raid Volume", 00:11:59.517 "block_size": 512, 00:11:59.517 "num_blocks": 262144, 00:11:59.517 "uuid": "7efbd1ef-aa47-4592-89cb-979a08d7c55e", 00:11:59.518 "assigned_rate_limits": { 00:11:59.518 "rw_ios_per_sec": 0, 00:11:59.518 "rw_mbytes_per_sec": 0, 00:11:59.518 "r_mbytes_per_sec": 0, 00:11:59.518 "w_mbytes_per_sec": 0 00:11:59.518 }, 00:11:59.518 "claimed": false, 00:11:59.518 "zoned": false, 00:11:59.518 "supported_io_types": { 00:11:59.518 "read": true, 00:11:59.518 "write": true, 00:11:59.518 "unmap": true, 00:11:59.518 "flush": true, 00:11:59.518 "reset": true, 00:11:59.518 "nvme_admin": false, 00:11:59.518 "nvme_io": false, 00:11:59.518 "nvme_io_md": false, 00:11:59.518 "write_zeroes": true, 00:11:59.518 "zcopy": false, 00:11:59.518 "get_zone_info": false, 00:11:59.518 "zone_management": false, 00:11:59.518 "zone_append": false, 00:11:59.518 "compare": false, 00:11:59.518 "compare_and_write": false, 00:11:59.518 "abort": false, 00:11:59.518 "seek_hole": false, 00:11:59.518 "seek_data": false, 00:11:59.518 "copy": false, 00:11:59.518 "nvme_iov_md": false 00:11:59.518 }, 00:11:59.518 "memory_domains": [ 00:11:59.518 { 00:11:59.518 "dma_device_id": "system", 00:11:59.518 "dma_device_type": 1 00:11:59.518 }, 00:11:59.518 { 00:11:59.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.518 "dma_device_type": 2 00:11:59.518 }, 00:11:59.518 { 00:11:59.518 "dma_device_id": "system", 00:11:59.518 "dma_device_type": 1 00:11:59.518 }, 00:11:59.518 { 00:11:59.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.518 "dma_device_type": 2 00:11:59.518 }, 00:11:59.518 { 00:11:59.518 "dma_device_id": "system", 00:11:59.518 "dma_device_type": 1 00:11:59.518 }, 00:11:59.518 { 00:11:59.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 
00:11:59.518 "dma_device_type": 2 00:11:59.518 }, 00:11:59.518 { 00:11:59.518 "dma_device_id": "system", 00:11:59.518 "dma_device_type": 1 00:11:59.518 }, 00:11:59.518 { 00:11:59.518 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:59.518 "dma_device_type": 2 00:11:59.518 } 00:11:59.518 ], 00:11:59.518 "driver_specific": { 00:11:59.518 "raid": { 00:11:59.518 "uuid": "7efbd1ef-aa47-4592-89cb-979a08d7c55e", 00:11:59.518 "strip_size_kb": 64, 00:11:59.518 "state": "online", 00:11:59.518 "raid_level": "raid0", 00:11:59.518 "superblock": false, 00:11:59.518 "num_base_bdevs": 4, 00:11:59.518 "num_base_bdevs_discovered": 4, 00:11:59.518 "num_base_bdevs_operational": 4, 00:11:59.518 "base_bdevs_list": [ 00:11:59.518 { 00:11:59.518 "name": "BaseBdev1", 00:11:59.518 "uuid": "fcd14749-5d2f-40df-a391-abe7814e0f11", 00:11:59.518 "is_configured": true, 00:11:59.518 "data_offset": 0, 00:11:59.518 "data_size": 65536 00:11:59.518 }, 00:11:59.518 { 00:11:59.518 "name": "BaseBdev2", 00:11:59.518 "uuid": "bde9da15-7135-4335-89ee-9e80aba3e07b", 00:11:59.518 "is_configured": true, 00:11:59.518 "data_offset": 0, 00:11:59.518 "data_size": 65536 00:11:59.518 }, 00:11:59.518 { 00:11:59.518 "name": "BaseBdev3", 00:11:59.518 "uuid": "95699e5b-a480-4e75-9fa0-ea77912cd2ee", 00:11:59.518 "is_configured": true, 00:11:59.518 "data_offset": 0, 00:11:59.518 "data_size": 65536 00:11:59.518 }, 00:11:59.518 { 00:11:59.518 "name": "BaseBdev4", 00:11:59.518 "uuid": "3b7850c6-e3c7-49ec-b5eb-88ad79ba7262", 00:11:59.518 "is_configured": true, 00:11:59.518 "data_offset": 0, 00:11:59.518 "data_size": 65536 00:11:59.518 } 00:11:59.518 ] 00:11:59.518 } 00:11:59.518 } 00:11:59.518 }' 00:11:59.518 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:59.518 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:11:59.518 BaseBdev2 00:11:59.518 BaseBdev3 
00:11:59.518 BaseBdev4' 00:11:59.518 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.518 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:11:59.518 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.518 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:11:59.518 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.518 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.518 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.518 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.790 14:11:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:11:59.790 14:11:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.790 14:11:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:59.790 [2024-11-27 14:11:36.977547] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:59.790 [2024-11-27 14:11:36.977729] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:59.790 [2024-11-27 14:11:36.977920] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:00.071 "name": "Existed_Raid", 00:12:00.071 "uuid": "7efbd1ef-aa47-4592-89cb-979a08d7c55e", 00:12:00.071 "strip_size_kb": 64, 00:12:00.071 "state": "offline", 00:12:00.071 "raid_level": "raid0", 00:12:00.071 "superblock": false, 00:12:00.071 "num_base_bdevs": 4, 00:12:00.071 "num_base_bdevs_discovered": 3, 00:12:00.071 "num_base_bdevs_operational": 3, 00:12:00.071 "base_bdevs_list": [ 00:12:00.071 { 00:12:00.071 "name": null, 00:12:00.071 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:00.071 "is_configured": false, 00:12:00.071 "data_offset": 0, 00:12:00.071 "data_size": 65536 00:12:00.071 }, 00:12:00.071 { 00:12:00.071 "name": "BaseBdev2", 00:12:00.071 "uuid": "bde9da15-7135-4335-89ee-9e80aba3e07b", 00:12:00.071 "is_configured": 
true, 00:12:00.071 "data_offset": 0, 00:12:00.071 "data_size": 65536 00:12:00.071 }, 00:12:00.071 { 00:12:00.071 "name": "BaseBdev3", 00:12:00.071 "uuid": "95699e5b-a480-4e75-9fa0-ea77912cd2ee", 00:12:00.071 "is_configured": true, 00:12:00.071 "data_offset": 0, 00:12:00.071 "data_size": 65536 00:12:00.071 }, 00:12:00.071 { 00:12:00.071 "name": "BaseBdev4", 00:12:00.071 "uuid": "3b7850c6-e3c7-49ec-b5eb-88ad79ba7262", 00:12:00.071 "is_configured": true, 00:12:00.071 "data_offset": 0, 00:12:00.071 "data_size": 65536 00:12:00.071 } 00:12:00.071 ] 00:12:00.071 }' 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:00.071 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.330 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:00.330 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.330 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.330 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.330 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.330 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.330 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.589 [2024-11-27 14:11:37.626210] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.589 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.589 [2024-11-27 14:11:37.778694] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.849 14:11:37 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.849 14:11:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.849 [2024-11-27 14:11:37.927697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:00.849 [2024-11-27 14:11:37.927934] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r 
'.[0]["name"] | select(.)' 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:00.849 BaseBdev2 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:00.849 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.109 [ 00:12:01.109 { 00:12:01.109 "name": "BaseBdev2", 00:12:01.109 "aliases": [ 00:12:01.109 "29267839-d490-449b-91fb-7685153e1a19" 00:12:01.109 ], 00:12:01.109 "product_name": "Malloc disk", 00:12:01.109 "block_size": 512, 00:12:01.109 "num_blocks": 65536, 00:12:01.109 "uuid": "29267839-d490-449b-91fb-7685153e1a19", 00:12:01.109 "assigned_rate_limits": { 00:12:01.109 "rw_ios_per_sec": 0, 00:12:01.109 "rw_mbytes_per_sec": 0, 00:12:01.109 "r_mbytes_per_sec": 0, 00:12:01.109 "w_mbytes_per_sec": 0 00:12:01.109 }, 00:12:01.109 "claimed": false, 00:12:01.109 "zoned": false, 00:12:01.109 "supported_io_types": { 00:12:01.109 "read": true, 00:12:01.109 "write": true, 00:12:01.109 "unmap": true, 00:12:01.109 "flush": true, 00:12:01.109 "reset": true, 00:12:01.109 "nvme_admin": false, 00:12:01.109 "nvme_io": false, 00:12:01.109 "nvme_io_md": false, 00:12:01.109 "write_zeroes": true, 00:12:01.109 "zcopy": true, 00:12:01.109 "get_zone_info": false, 00:12:01.109 "zone_management": false, 00:12:01.109 "zone_append": false, 00:12:01.109 "compare": false, 00:12:01.109 "compare_and_write": false, 00:12:01.109 "abort": true, 00:12:01.109 "seek_hole": false, 00:12:01.109 
"seek_data": false, 00:12:01.109 "copy": true, 00:12:01.109 "nvme_iov_md": false 00:12:01.109 }, 00:12:01.109 "memory_domains": [ 00:12:01.109 { 00:12:01.109 "dma_device_id": "system", 00:12:01.109 "dma_device_type": 1 00:12:01.109 }, 00:12:01.109 { 00:12:01.109 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.109 "dma_device_type": 2 00:12:01.109 } 00:12:01.109 ], 00:12:01.109 "driver_specific": {} 00:12:01.109 } 00:12:01.109 ] 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.109 BaseBdev3 00:12:01.109 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.110 [ 00:12:01.110 { 00:12:01.110 "name": "BaseBdev3", 00:12:01.110 "aliases": [ 00:12:01.110 "0f755b77-2d3d-4b93-ab53-b86230c39b8b" 00:12:01.110 ], 00:12:01.110 "product_name": "Malloc disk", 00:12:01.110 "block_size": 512, 00:12:01.110 "num_blocks": 65536, 00:12:01.110 "uuid": "0f755b77-2d3d-4b93-ab53-b86230c39b8b", 00:12:01.110 "assigned_rate_limits": { 00:12:01.110 "rw_ios_per_sec": 0, 00:12:01.110 "rw_mbytes_per_sec": 0, 00:12:01.110 "r_mbytes_per_sec": 0, 00:12:01.110 "w_mbytes_per_sec": 0 00:12:01.110 }, 00:12:01.110 "claimed": false, 00:12:01.110 "zoned": false, 00:12:01.110 "supported_io_types": { 00:12:01.110 "read": true, 00:12:01.110 "write": true, 00:12:01.110 "unmap": true, 00:12:01.110 "flush": true, 00:12:01.110 "reset": true, 00:12:01.110 "nvme_admin": false, 00:12:01.110 "nvme_io": false, 00:12:01.110 "nvme_io_md": false, 00:12:01.110 "write_zeroes": true, 00:12:01.110 "zcopy": true, 00:12:01.110 "get_zone_info": false, 00:12:01.110 "zone_management": false, 00:12:01.110 "zone_append": false, 00:12:01.110 "compare": false, 00:12:01.110 "compare_and_write": false, 00:12:01.110 "abort": true, 00:12:01.110 "seek_hole": false, 00:12:01.110 "seek_data": false, 
00:12:01.110 "copy": true, 00:12:01.110 "nvme_iov_md": false 00:12:01.110 }, 00:12:01.110 "memory_domains": [ 00:12:01.110 { 00:12:01.110 "dma_device_id": "system", 00:12:01.110 "dma_device_type": 1 00:12:01.110 }, 00:12:01.110 { 00:12:01.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.110 "dma_device_type": 2 00:12:01.110 } 00:12:01.110 ], 00:12:01.110 "driver_specific": {} 00:12:01.110 } 00:12:01.110 ] 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.110 BaseBdev4 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:01.110 
14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.110 [ 00:12:01.110 { 00:12:01.110 "name": "BaseBdev4", 00:12:01.110 "aliases": [ 00:12:01.110 "0e5cbda2-e5b1-4228-8754-dbfe00e17b94" 00:12:01.110 ], 00:12:01.110 "product_name": "Malloc disk", 00:12:01.110 "block_size": 512, 00:12:01.110 "num_blocks": 65536, 00:12:01.110 "uuid": "0e5cbda2-e5b1-4228-8754-dbfe00e17b94", 00:12:01.110 "assigned_rate_limits": { 00:12:01.110 "rw_ios_per_sec": 0, 00:12:01.110 "rw_mbytes_per_sec": 0, 00:12:01.110 "r_mbytes_per_sec": 0, 00:12:01.110 "w_mbytes_per_sec": 0 00:12:01.110 }, 00:12:01.110 "claimed": false, 00:12:01.110 "zoned": false, 00:12:01.110 "supported_io_types": { 00:12:01.110 "read": true, 00:12:01.110 "write": true, 00:12:01.110 "unmap": true, 00:12:01.110 "flush": true, 00:12:01.110 "reset": true, 00:12:01.110 "nvme_admin": false, 00:12:01.110 "nvme_io": false, 00:12:01.110 "nvme_io_md": false, 00:12:01.110 "write_zeroes": true, 00:12:01.110 "zcopy": true, 00:12:01.110 "get_zone_info": false, 00:12:01.110 "zone_management": false, 00:12:01.110 "zone_append": false, 00:12:01.110 "compare": false, 00:12:01.110 "compare_and_write": false, 00:12:01.110 "abort": true, 00:12:01.110 "seek_hole": false, 00:12:01.110 "seek_data": false, 00:12:01.110 
"copy": true, 00:12:01.110 "nvme_iov_md": false 00:12:01.110 }, 00:12:01.110 "memory_domains": [ 00:12:01.110 { 00:12:01.110 "dma_device_id": "system", 00:12:01.110 "dma_device_type": 1 00:12:01.110 }, 00:12:01.110 { 00:12:01.110 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:01.110 "dma_device_type": 2 00:12:01.110 } 00:12:01.110 ], 00:12:01.110 "driver_specific": {} 00:12:01.110 } 00:12:01.110 ] 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.110 [2024-11-27 14:11:38.301338] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:01.110 [2024-11-27 14:11:38.301528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:01.110 [2024-11-27 14:11:38.301677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:01.110 [2024-11-27 14:11:38.304401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:01.110 [2024-11-27 14:11:38.304597] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.110 14:11:38 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.110 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.110 "name": "Existed_Raid", 00:12:01.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.110 "strip_size_kb": 64, 00:12:01.110 "state": "configuring", 00:12:01.110 
"raid_level": "raid0", 00:12:01.110 "superblock": false, 00:12:01.110 "num_base_bdevs": 4, 00:12:01.110 "num_base_bdevs_discovered": 3, 00:12:01.110 "num_base_bdevs_operational": 4, 00:12:01.110 "base_bdevs_list": [ 00:12:01.110 { 00:12:01.110 "name": "BaseBdev1", 00:12:01.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.110 "is_configured": false, 00:12:01.110 "data_offset": 0, 00:12:01.110 "data_size": 0 00:12:01.110 }, 00:12:01.110 { 00:12:01.110 "name": "BaseBdev2", 00:12:01.110 "uuid": "29267839-d490-449b-91fb-7685153e1a19", 00:12:01.111 "is_configured": true, 00:12:01.111 "data_offset": 0, 00:12:01.111 "data_size": 65536 00:12:01.111 }, 00:12:01.111 { 00:12:01.111 "name": "BaseBdev3", 00:12:01.111 "uuid": "0f755b77-2d3d-4b93-ab53-b86230c39b8b", 00:12:01.111 "is_configured": true, 00:12:01.111 "data_offset": 0, 00:12:01.111 "data_size": 65536 00:12:01.111 }, 00:12:01.111 { 00:12:01.111 "name": "BaseBdev4", 00:12:01.111 "uuid": "0e5cbda2-e5b1-4228-8754-dbfe00e17b94", 00:12:01.111 "is_configured": true, 00:12:01.111 "data_offset": 0, 00:12:01.111 "data_size": 65536 00:12:01.111 } 00:12:01.111 ] 00:12:01.111 }' 00:12:01.111 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.111 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.678 [2024-11-27 14:11:38.845535] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:01.678 "name": "Existed_Raid", 00:12:01.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.678 "strip_size_kb": 64, 00:12:01.678 "state": "configuring", 00:12:01.678 "raid_level": "raid0", 00:12:01.678 "superblock": false, 00:12:01.678 
"num_base_bdevs": 4, 00:12:01.678 "num_base_bdevs_discovered": 2, 00:12:01.678 "num_base_bdevs_operational": 4, 00:12:01.678 "base_bdevs_list": [ 00:12:01.678 { 00:12:01.678 "name": "BaseBdev1", 00:12:01.678 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.678 "is_configured": false, 00:12:01.678 "data_offset": 0, 00:12:01.678 "data_size": 0 00:12:01.678 }, 00:12:01.678 { 00:12:01.678 "name": null, 00:12:01.678 "uuid": "29267839-d490-449b-91fb-7685153e1a19", 00:12:01.678 "is_configured": false, 00:12:01.678 "data_offset": 0, 00:12:01.678 "data_size": 65536 00:12:01.678 }, 00:12:01.678 { 00:12:01.678 "name": "BaseBdev3", 00:12:01.678 "uuid": "0f755b77-2d3d-4b93-ab53-b86230c39b8b", 00:12:01.678 "is_configured": true, 00:12:01.678 "data_offset": 0, 00:12:01.678 "data_size": 65536 00:12:01.678 }, 00:12:01.678 { 00:12:01.678 "name": "BaseBdev4", 00:12:01.678 "uuid": "0e5cbda2-e5b1-4228-8754-dbfe00e17b94", 00:12:01.678 "is_configured": true, 00:12:01.678 "data_offset": 0, 00:12:01.678 "data_size": 65536 00:12:01.678 } 00:12:01.678 ] 00:12:01.678 }' 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:01.678 14:11:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:02.244 14:11:39 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.244 [2024-11-27 14:11:39.488752] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:02.244 BaseBdev1 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:02.244 [ 00:12:02.244 { 00:12:02.244 "name": "BaseBdev1", 00:12:02.244 "aliases": [ 00:12:02.244 "dc808209-da56-480c-baea-aca35c530f3c" 00:12:02.244 ], 00:12:02.244 "product_name": "Malloc disk", 00:12:02.244 "block_size": 512, 00:12:02.244 "num_blocks": 65536, 00:12:02.244 "uuid": "dc808209-da56-480c-baea-aca35c530f3c", 00:12:02.244 "assigned_rate_limits": { 00:12:02.244 "rw_ios_per_sec": 0, 00:12:02.244 "rw_mbytes_per_sec": 0, 00:12:02.244 "r_mbytes_per_sec": 0, 00:12:02.244 "w_mbytes_per_sec": 0 00:12:02.244 }, 00:12:02.244 "claimed": true, 00:12:02.244 "claim_type": "exclusive_write", 00:12:02.244 "zoned": false, 00:12:02.244 "supported_io_types": { 00:12:02.244 "read": true, 00:12:02.244 "write": true, 00:12:02.244 "unmap": true, 00:12:02.244 "flush": true, 00:12:02.244 "reset": true, 00:12:02.244 "nvme_admin": false, 00:12:02.244 "nvme_io": false, 00:12:02.244 "nvme_io_md": false, 00:12:02.244 "write_zeroes": true, 00:12:02.244 "zcopy": true, 00:12:02.244 "get_zone_info": false, 00:12:02.244 "zone_management": false, 00:12:02.244 "zone_append": false, 00:12:02.244 "compare": false, 00:12:02.244 "compare_and_write": false, 00:12:02.244 "abort": true, 00:12:02.244 "seek_hole": false, 00:12:02.244 "seek_data": false, 00:12:02.244 "copy": true, 00:12:02.244 "nvme_iov_md": false 00:12:02.244 }, 00:12:02.244 "memory_domains": [ 00:12:02.244 { 00:12:02.244 "dma_device_id": "system", 00:12:02.244 "dma_device_type": 1 00:12:02.244 }, 00:12:02.244 { 00:12:02.244 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.244 "dma_device_type": 2 00:12:02.244 } 00:12:02.244 ], 00:12:02.244 "driver_specific": {} 00:12:02.244 } 00:12:02.244 ] 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:02.244 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:02.502 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:02.502 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:02.502 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:02.502 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.502 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.502 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.502 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.502 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:02.502 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:02.502 "name": "Existed_Raid", 00:12:02.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:02.502 "strip_size_kb": 64, 00:12:02.502 "state": "configuring", 00:12:02.502 "raid_level": "raid0", 00:12:02.502 "superblock": false, 
00:12:02.502 "num_base_bdevs": 4, 00:12:02.502 "num_base_bdevs_discovered": 3, 00:12:02.502 "num_base_bdevs_operational": 4, 00:12:02.502 "base_bdevs_list": [ 00:12:02.502 { 00:12:02.502 "name": "BaseBdev1", 00:12:02.502 "uuid": "dc808209-da56-480c-baea-aca35c530f3c", 00:12:02.502 "is_configured": true, 00:12:02.502 "data_offset": 0, 00:12:02.502 "data_size": 65536 00:12:02.502 }, 00:12:02.502 { 00:12:02.502 "name": null, 00:12:02.502 "uuid": "29267839-d490-449b-91fb-7685153e1a19", 00:12:02.502 "is_configured": false, 00:12:02.502 "data_offset": 0, 00:12:02.502 "data_size": 65536 00:12:02.502 }, 00:12:02.502 { 00:12:02.502 "name": "BaseBdev3", 00:12:02.502 "uuid": "0f755b77-2d3d-4b93-ab53-b86230c39b8b", 00:12:02.502 "is_configured": true, 00:12:02.502 "data_offset": 0, 00:12:02.502 "data_size": 65536 00:12:02.502 }, 00:12:02.502 { 00:12:02.502 "name": "BaseBdev4", 00:12:02.502 "uuid": "0e5cbda2-e5b1-4228-8754-dbfe00e17b94", 00:12:02.502 "is_configured": true, 00:12:02.502 "data_offset": 0, 00:12:02.502 "data_size": 65536 00:12:02.502 } 00:12:02.502 ] 00:12:02.502 }' 00:12:02.502 14:11:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:02.502 14:11:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:02.760 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:02.760 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:02.760 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:02.760 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.019 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.019 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:03.019 14:11:40 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.020 [2024-11-27 14:11:40.093121] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.020 14:11:40 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.020 "name": "Existed_Raid", 00:12:03.020 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.020 "strip_size_kb": 64, 00:12:03.020 "state": "configuring", 00:12:03.020 "raid_level": "raid0", 00:12:03.020 "superblock": false, 00:12:03.020 "num_base_bdevs": 4, 00:12:03.020 "num_base_bdevs_discovered": 2, 00:12:03.020 "num_base_bdevs_operational": 4, 00:12:03.020 "base_bdevs_list": [ 00:12:03.020 { 00:12:03.020 "name": "BaseBdev1", 00:12:03.020 "uuid": "dc808209-da56-480c-baea-aca35c530f3c", 00:12:03.020 "is_configured": true, 00:12:03.020 "data_offset": 0, 00:12:03.020 "data_size": 65536 00:12:03.020 }, 00:12:03.020 { 00:12:03.020 "name": null, 00:12:03.020 "uuid": "29267839-d490-449b-91fb-7685153e1a19", 00:12:03.020 "is_configured": false, 00:12:03.020 "data_offset": 0, 00:12:03.020 "data_size": 65536 00:12:03.020 }, 00:12:03.020 { 00:12:03.020 "name": null, 00:12:03.020 "uuid": "0f755b77-2d3d-4b93-ab53-b86230c39b8b", 00:12:03.020 "is_configured": false, 00:12:03.020 "data_offset": 0, 00:12:03.020 "data_size": 65536 00:12:03.020 }, 00:12:03.020 { 00:12:03.020 "name": "BaseBdev4", 00:12:03.020 "uuid": "0e5cbda2-e5b1-4228-8754-dbfe00e17b94", 00:12:03.020 "is_configured": true, 00:12:03.020 "data_offset": 0, 00:12:03.020 "data_size": 65536 00:12:03.020 } 00:12:03.020 ] 00:12:03.020 }' 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.020 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.588 [2024-11-27 14:11:40.697241] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:03.588 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:03.588 "name": "Existed_Raid", 00:12:03.588 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:03.588 "strip_size_kb": 64, 00:12:03.588 "state": "configuring", 00:12:03.588 "raid_level": "raid0", 00:12:03.589 "superblock": false, 00:12:03.589 "num_base_bdevs": 4, 00:12:03.589 "num_base_bdevs_discovered": 3, 00:12:03.589 "num_base_bdevs_operational": 4, 00:12:03.589 "base_bdevs_list": [ 00:12:03.589 { 00:12:03.589 "name": "BaseBdev1", 00:12:03.589 "uuid": "dc808209-da56-480c-baea-aca35c530f3c", 00:12:03.589 "is_configured": true, 00:12:03.589 "data_offset": 0, 00:12:03.589 "data_size": 65536 00:12:03.589 }, 00:12:03.589 { 00:12:03.589 "name": null, 00:12:03.589 "uuid": "29267839-d490-449b-91fb-7685153e1a19", 00:12:03.589 "is_configured": false, 00:12:03.589 "data_offset": 0, 00:12:03.589 "data_size": 65536 00:12:03.589 }, 00:12:03.589 { 00:12:03.589 "name": "BaseBdev3", 00:12:03.589 "uuid": "0f755b77-2d3d-4b93-ab53-b86230c39b8b", 
00:12:03.589 "is_configured": true, 00:12:03.589 "data_offset": 0, 00:12:03.589 "data_size": 65536 00:12:03.589 }, 00:12:03.589 { 00:12:03.589 "name": "BaseBdev4", 00:12:03.589 "uuid": "0e5cbda2-e5b1-4228-8754-dbfe00e17b94", 00:12:03.589 "is_configured": true, 00:12:03.589 "data_offset": 0, 00:12:03.589 "data_size": 65536 00:12:03.589 } 00:12:03.589 ] 00:12:03.589 }' 00:12:03.589 14:11:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:03.589 14:11:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.154 [2024-11-27 14:11:41.261465] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:04.154 14:11:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.154 "name": "Existed_Raid", 00:12:04.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.154 "strip_size_kb": 64, 00:12:04.154 "state": "configuring", 00:12:04.154 "raid_level": "raid0", 00:12:04.154 "superblock": false, 00:12:04.154 "num_base_bdevs": 4, 00:12:04.154 "num_base_bdevs_discovered": 2, 00:12:04.154 
"num_base_bdevs_operational": 4, 00:12:04.154 "base_bdevs_list": [ 00:12:04.154 { 00:12:04.154 "name": null, 00:12:04.154 "uuid": "dc808209-da56-480c-baea-aca35c530f3c", 00:12:04.154 "is_configured": false, 00:12:04.154 "data_offset": 0, 00:12:04.154 "data_size": 65536 00:12:04.154 }, 00:12:04.154 { 00:12:04.154 "name": null, 00:12:04.154 "uuid": "29267839-d490-449b-91fb-7685153e1a19", 00:12:04.154 "is_configured": false, 00:12:04.154 "data_offset": 0, 00:12:04.154 "data_size": 65536 00:12:04.154 }, 00:12:04.154 { 00:12:04.154 "name": "BaseBdev3", 00:12:04.154 "uuid": "0f755b77-2d3d-4b93-ab53-b86230c39b8b", 00:12:04.154 "is_configured": true, 00:12:04.154 "data_offset": 0, 00:12:04.154 "data_size": 65536 00:12:04.154 }, 00:12:04.154 { 00:12:04.154 "name": "BaseBdev4", 00:12:04.154 "uuid": "0e5cbda2-e5b1-4228-8754-dbfe00e17b94", 00:12:04.154 "is_configured": true, 00:12:04.154 "data_offset": 0, 00:12:04.154 "data_size": 65536 00:12:04.154 } 00:12:04.154 ] 00:12:04.154 }' 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.154 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev 
Existed_Raid BaseBdev2 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.721 [2024-11-27 14:11:41.917542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:04.721 14:11:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:04.721 "name": "Existed_Raid", 00:12:04.721 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.721 "strip_size_kb": 64, 00:12:04.721 "state": "configuring", 00:12:04.721 "raid_level": "raid0", 00:12:04.721 "superblock": false, 00:12:04.721 "num_base_bdevs": 4, 00:12:04.721 "num_base_bdevs_discovered": 3, 00:12:04.721 "num_base_bdevs_operational": 4, 00:12:04.721 "base_bdevs_list": [ 00:12:04.721 { 00:12:04.721 "name": null, 00:12:04.721 "uuid": "dc808209-da56-480c-baea-aca35c530f3c", 00:12:04.721 "is_configured": false, 00:12:04.721 "data_offset": 0, 00:12:04.721 "data_size": 65536 00:12:04.721 }, 00:12:04.721 { 00:12:04.721 "name": "BaseBdev2", 00:12:04.721 "uuid": "29267839-d490-449b-91fb-7685153e1a19", 00:12:04.721 "is_configured": true, 00:12:04.721 "data_offset": 0, 00:12:04.721 "data_size": 65536 00:12:04.721 }, 00:12:04.721 { 00:12:04.721 "name": "BaseBdev3", 00:12:04.721 "uuid": "0f755b77-2d3d-4b93-ab53-b86230c39b8b", 00:12:04.721 "is_configured": true, 00:12:04.721 "data_offset": 0, 00:12:04.721 "data_size": 65536 00:12:04.721 }, 00:12:04.721 { 00:12:04.721 "name": "BaseBdev4", 00:12:04.721 "uuid": "0e5cbda2-e5b1-4228-8754-dbfe00e17b94", 00:12:04.721 "is_configured": true, 00:12:04.721 "data_offset": 0, 00:12:04.721 "data_size": 65536 00:12:04.721 } 00:12:04.721 ] 00:12:04.721 }' 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:04.721 14:11:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.289 14:11:42 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u dc808209-da56-480c-baea-aca35c530f3c 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.289 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.549 [2024-11-27 14:11:42.584704] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:05.549 NewBaseBdev 00:12:05.549 [2024-11-27 14:11:42.584945] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:05.549 [2024-11-27 14:11:42.584971] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:05.549 [2024-11-27 14:11:42.585313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:12:05.549 [2024-11-27 14:11:42.585498] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:05.549 [2024-11-27 14:11:42.585520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:05.549 [2024-11-27 14:11:42.585841] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:05.549 [ 00:12:05.549 { 00:12:05.549 "name": "NewBaseBdev", 00:12:05.549 "aliases": [ 00:12:05.549 "dc808209-da56-480c-baea-aca35c530f3c" 00:12:05.549 ], 00:12:05.549 "product_name": "Malloc disk", 00:12:05.549 "block_size": 512, 00:12:05.549 "num_blocks": 65536, 00:12:05.549 "uuid": "dc808209-da56-480c-baea-aca35c530f3c", 00:12:05.549 "assigned_rate_limits": { 00:12:05.549 "rw_ios_per_sec": 0, 00:12:05.549 "rw_mbytes_per_sec": 0, 00:12:05.549 "r_mbytes_per_sec": 0, 00:12:05.549 "w_mbytes_per_sec": 0 00:12:05.549 }, 00:12:05.549 "claimed": true, 00:12:05.549 "claim_type": "exclusive_write", 00:12:05.549 "zoned": false, 00:12:05.549 "supported_io_types": { 00:12:05.549 "read": true, 00:12:05.549 "write": true, 00:12:05.549 "unmap": true, 00:12:05.549 "flush": true, 00:12:05.549 "reset": true, 00:12:05.549 "nvme_admin": false, 00:12:05.549 "nvme_io": false, 00:12:05.549 "nvme_io_md": false, 00:12:05.549 "write_zeroes": true, 00:12:05.549 "zcopy": true, 00:12:05.549 "get_zone_info": false, 00:12:05.549 "zone_management": false, 00:12:05.549 "zone_append": false, 00:12:05.549 "compare": false, 00:12:05.549 "compare_and_write": false, 00:12:05.549 "abort": true, 00:12:05.549 "seek_hole": false, 00:12:05.549 "seek_data": false, 00:12:05.549 "copy": true, 00:12:05.549 "nvme_iov_md": false 00:12:05.549 }, 00:12:05.549 "memory_domains": [ 00:12:05.549 { 00:12:05.549 "dma_device_id": "system", 00:12:05.549 "dma_device_type": 1 00:12:05.549 }, 00:12:05.549 { 00:12:05.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:05.549 "dma_device_type": 2 00:12:05.549 } 00:12:05.549 ], 00:12:05.549 "driver_specific": {} 00:12:05.549 } 00:12:05.549 ] 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid 
online raid0 64 4 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:05.549 "name": "Existed_Raid", 00:12:05.549 "uuid": "9eccb800-df71-46cb-9eb8-355e56dcbf90", 00:12:05.549 "strip_size_kb": 64, 00:12:05.549 "state": "online", 00:12:05.549 "raid_level": "raid0", 00:12:05.549 "superblock": false, 00:12:05.549 "num_base_bdevs": 4, 00:12:05.549 
"num_base_bdevs_discovered": 4, 00:12:05.549 "num_base_bdevs_operational": 4, 00:12:05.549 "base_bdevs_list": [ 00:12:05.549 { 00:12:05.549 "name": "NewBaseBdev", 00:12:05.549 "uuid": "dc808209-da56-480c-baea-aca35c530f3c", 00:12:05.549 "is_configured": true, 00:12:05.549 "data_offset": 0, 00:12:05.549 "data_size": 65536 00:12:05.549 }, 00:12:05.549 { 00:12:05.549 "name": "BaseBdev2", 00:12:05.549 "uuid": "29267839-d490-449b-91fb-7685153e1a19", 00:12:05.549 "is_configured": true, 00:12:05.549 "data_offset": 0, 00:12:05.549 "data_size": 65536 00:12:05.549 }, 00:12:05.549 { 00:12:05.549 "name": "BaseBdev3", 00:12:05.549 "uuid": "0f755b77-2d3d-4b93-ab53-b86230c39b8b", 00:12:05.549 "is_configured": true, 00:12:05.549 "data_offset": 0, 00:12:05.549 "data_size": 65536 00:12:05.549 }, 00:12:05.549 { 00:12:05.549 "name": "BaseBdev4", 00:12:05.549 "uuid": "0e5cbda2-e5b1-4228-8754-dbfe00e17b94", 00:12:05.549 "is_configured": true, 00:12:05.549 "data_offset": 0, 00:12:05.549 "data_size": 65536 00:12:05.549 } 00:12:05.549 ] 00:12:05.549 }' 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:05.549 14:11:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:06.118 [2024-11-27 14:11:43.125398] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:06.118 "name": "Existed_Raid", 00:12:06.118 "aliases": [ 00:12:06.118 "9eccb800-df71-46cb-9eb8-355e56dcbf90" 00:12:06.118 ], 00:12:06.118 "product_name": "Raid Volume", 00:12:06.118 "block_size": 512, 00:12:06.118 "num_blocks": 262144, 00:12:06.118 "uuid": "9eccb800-df71-46cb-9eb8-355e56dcbf90", 00:12:06.118 "assigned_rate_limits": { 00:12:06.118 "rw_ios_per_sec": 0, 00:12:06.118 "rw_mbytes_per_sec": 0, 00:12:06.118 "r_mbytes_per_sec": 0, 00:12:06.118 "w_mbytes_per_sec": 0 00:12:06.118 }, 00:12:06.118 "claimed": false, 00:12:06.118 "zoned": false, 00:12:06.118 "supported_io_types": { 00:12:06.118 "read": true, 00:12:06.118 "write": true, 00:12:06.118 "unmap": true, 00:12:06.118 "flush": true, 00:12:06.118 "reset": true, 00:12:06.118 "nvme_admin": false, 00:12:06.118 "nvme_io": false, 00:12:06.118 "nvme_io_md": false, 00:12:06.118 "write_zeroes": true, 00:12:06.118 "zcopy": false, 00:12:06.118 "get_zone_info": false, 00:12:06.118 "zone_management": false, 00:12:06.118 "zone_append": false, 00:12:06.118 "compare": false, 00:12:06.118 "compare_and_write": false, 00:12:06.118 "abort": false, 00:12:06.118 "seek_hole": false, 00:12:06.118 "seek_data": false, 00:12:06.118 "copy": false, 00:12:06.118 "nvme_iov_md": false 00:12:06.118 }, 00:12:06.118 "memory_domains": [ 
00:12:06.118 { 00:12:06.118 "dma_device_id": "system", 00:12:06.118 "dma_device_type": 1 00:12:06.118 }, 00:12:06.118 { 00:12:06.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.118 "dma_device_type": 2 00:12:06.118 }, 00:12:06.118 { 00:12:06.118 "dma_device_id": "system", 00:12:06.118 "dma_device_type": 1 00:12:06.118 }, 00:12:06.118 { 00:12:06.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.118 "dma_device_type": 2 00:12:06.118 }, 00:12:06.118 { 00:12:06.118 "dma_device_id": "system", 00:12:06.118 "dma_device_type": 1 00:12:06.118 }, 00:12:06.118 { 00:12:06.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.118 "dma_device_type": 2 00:12:06.118 }, 00:12:06.118 { 00:12:06.118 "dma_device_id": "system", 00:12:06.118 "dma_device_type": 1 00:12:06.118 }, 00:12:06.118 { 00:12:06.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.118 "dma_device_type": 2 00:12:06.118 } 00:12:06.118 ], 00:12:06.118 "driver_specific": { 00:12:06.118 "raid": { 00:12:06.118 "uuid": "9eccb800-df71-46cb-9eb8-355e56dcbf90", 00:12:06.118 "strip_size_kb": 64, 00:12:06.118 "state": "online", 00:12:06.118 "raid_level": "raid0", 00:12:06.118 "superblock": false, 00:12:06.118 "num_base_bdevs": 4, 00:12:06.118 "num_base_bdevs_discovered": 4, 00:12:06.118 "num_base_bdevs_operational": 4, 00:12:06.118 "base_bdevs_list": [ 00:12:06.118 { 00:12:06.118 "name": "NewBaseBdev", 00:12:06.118 "uuid": "dc808209-da56-480c-baea-aca35c530f3c", 00:12:06.118 "is_configured": true, 00:12:06.118 "data_offset": 0, 00:12:06.118 "data_size": 65536 00:12:06.118 }, 00:12:06.118 { 00:12:06.118 "name": "BaseBdev2", 00:12:06.118 "uuid": "29267839-d490-449b-91fb-7685153e1a19", 00:12:06.118 "is_configured": true, 00:12:06.118 "data_offset": 0, 00:12:06.118 "data_size": 65536 00:12:06.118 }, 00:12:06.118 { 00:12:06.118 "name": "BaseBdev3", 00:12:06.118 "uuid": "0f755b77-2d3d-4b93-ab53-b86230c39b8b", 00:12:06.118 "is_configured": true, 00:12:06.118 "data_offset": 0, 00:12:06.118 "data_size": 65536 
00:12:06.118 }, 00:12:06.118 { 00:12:06.118 "name": "BaseBdev4", 00:12:06.118 "uuid": "0e5cbda2-e5b1-4228-8754-dbfe00e17b94", 00:12:06.118 "is_configured": true, 00:12:06.118 "data_offset": 0, 00:12:06.118 "data_size": 65536 00:12:06.118 } 00:12:06.118 ] 00:12:06.118 } 00:12:06.118 } 00:12:06.118 }' 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:06.118 BaseBdev2 00:12:06.118 BaseBdev3 00:12:06.118 BaseBdev4' 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.118 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.119 
14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.119 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 
00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:06.379 [2024-11-27 14:11:43.485016] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:06.379 [2024-11-27 14:11:43.485219] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:06.379 [2024-11-27 14:11:43.485426] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:06.379 [2024-11-27 14:11:43.485618] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:06.379 [2024-11-27 14:11:43.485741] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 69363 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@954 -- # '[' -z 69363 ']' 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 69363 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69363 00:12:06.379 killing process with pid 69363 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69363' 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 69363 00:12:06.379 [2024-11-27 14:11:43.520670] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:06.379 14:11:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 69363 00:12:06.638 [2024-11-27 14:11:43.868794] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:08.017 ************************************ 00:12:08.017 END TEST raid_state_function_test 00:12:08.017 ************************************ 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:08.017 00:12:08.017 real 0m12.817s 00:12:08.017 user 0m21.281s 00:12:08.017 sys 0m1.709s 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:08.017 14:11:44 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb 
raid_state_function_test raid0 4 true 00:12:08.017 14:11:44 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:08.017 14:11:44 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:08.017 14:11:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:08.017 ************************************ 00:12:08.017 START TEST raid_state_function_test_sb 00:12:08.017 ************************************ 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid0 4 true 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid0 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:08.017 
14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid0 '!=' raid1 ']' 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=70051 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 70051' 00:12:08.017 Process raid pid: 70051 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 70051 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 70051 ']' 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:08.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:08.017 14:11:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.017 [2024-11-27 14:11:45.145631] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:12:08.017 [2024-11-27 14:11:45.145902] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.340 [2024-11-27 14:11:45.332835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:08.340 [2024-11-27 14:11:45.462766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.605 [2024-11-27 14:11:45.671242] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.605 [2024-11-27 14:11:45.671525] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.863 [2024-11-27 14:11:46.078432] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:08.863 [2024-11-27 14:11:46.078518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:08.863 [2024-11-27 14:11:46.078535] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:08.863 [2024-11-27 14:11:46.078550] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:08.863 [2024-11-27 14:11:46.078559] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find 
bdev with name: BaseBdev3 00:12:08.863 [2024-11-27 14:11:46.078601] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:08.863 [2024-11-27 14:11:46.078612] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:08.863 [2024-11-27 14:11:46.078627] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.863 14:11:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.863 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.122 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.122 "name": "Existed_Raid", 00:12:09.122 "uuid": "c04fd95f-6735-41d1-af97-eaa4564b92cc", 00:12:09.122 "strip_size_kb": 64, 00:12:09.122 "state": "configuring", 00:12:09.122 "raid_level": "raid0", 00:12:09.122 "superblock": true, 00:12:09.122 "num_base_bdevs": 4, 00:12:09.122 "num_base_bdevs_discovered": 0, 00:12:09.122 "num_base_bdevs_operational": 4, 00:12:09.122 "base_bdevs_list": [ 00:12:09.122 { 00:12:09.122 "name": "BaseBdev1", 00:12:09.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.122 "is_configured": false, 00:12:09.122 "data_offset": 0, 00:12:09.122 "data_size": 0 00:12:09.122 }, 00:12:09.122 { 00:12:09.122 "name": "BaseBdev2", 00:12:09.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.122 "is_configured": false, 00:12:09.122 "data_offset": 0, 00:12:09.122 "data_size": 0 00:12:09.122 }, 00:12:09.122 { 00:12:09.122 "name": "BaseBdev3", 00:12:09.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.122 "is_configured": false, 00:12:09.122 "data_offset": 0, 00:12:09.122 "data_size": 0 00:12:09.122 }, 00:12:09.122 { 00:12:09.122 "name": "BaseBdev4", 00:12:09.122 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.122 "is_configured": false, 00:12:09.122 "data_offset": 0, 00:12:09.122 "data_size": 0 00:12:09.122 } 00:12:09.122 ] 00:12:09.122 }' 00:12:09.122 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.122 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.381 14:11:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:09.381 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.381 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.381 [2024-11-27 14:11:46.606545] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:09.381 [2024-11-27 14:11:46.606606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:09.381 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.381 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:09.381 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.381 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.381 [2024-11-27 14:11:46.618645] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:09.381 [2024-11-27 14:11:46.618840] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:09.381 [2024-11-27 14:11:46.618968] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:09.381 [2024-11-27 14:11:46.619031] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:09.381 [2024-11-27 14:11:46.619137] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:09.381 [2024-11-27 14:11:46.619171] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:09.381 [2024-11-27 14:11:46.619184] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:09.381 [2024-11-27 14:11:46.619199] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:09.381 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.381 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:09.381 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.381 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.639 [2024-11-27 14:11:46.665327] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.639 BaseBdev1 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.639 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.639 [ 00:12:09.639 { 00:12:09.639 "name": "BaseBdev1", 00:12:09.639 "aliases": [ 00:12:09.639 "1734547f-61a4-4fe8-a5d7-dc4738711ad1" 00:12:09.639 ], 00:12:09.639 "product_name": "Malloc disk", 00:12:09.639 "block_size": 512, 00:12:09.639 "num_blocks": 65536, 00:12:09.639 "uuid": "1734547f-61a4-4fe8-a5d7-dc4738711ad1", 00:12:09.639 "assigned_rate_limits": { 00:12:09.639 "rw_ios_per_sec": 0, 00:12:09.639 "rw_mbytes_per_sec": 0, 00:12:09.639 "r_mbytes_per_sec": 0, 00:12:09.639 "w_mbytes_per_sec": 0 00:12:09.639 }, 00:12:09.639 "claimed": true, 00:12:09.639 "claim_type": "exclusive_write", 00:12:09.639 "zoned": false, 00:12:09.639 "supported_io_types": { 00:12:09.639 "read": true, 00:12:09.639 "write": true, 00:12:09.639 "unmap": true, 00:12:09.639 "flush": true, 00:12:09.639 "reset": true, 00:12:09.639 "nvme_admin": false, 00:12:09.639 "nvme_io": false, 00:12:09.639 "nvme_io_md": false, 00:12:09.639 "write_zeroes": true, 00:12:09.639 "zcopy": true, 00:12:09.639 "get_zone_info": false, 00:12:09.639 "zone_management": false, 00:12:09.639 "zone_append": false, 00:12:09.639 "compare": false, 00:12:09.640 "compare_and_write": false, 00:12:09.640 "abort": true, 00:12:09.640 "seek_hole": false, 00:12:09.640 "seek_data": false, 00:12:09.640 "copy": true, 00:12:09.640 "nvme_iov_md": false 00:12:09.640 }, 00:12:09.640 "memory_domains": [ 00:12:09.640 { 00:12:09.640 "dma_device_id": "system", 00:12:09.640 "dma_device_type": 1 00:12:09.640 }, 00:12:09.640 { 00:12:09.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:09.640 "dma_device_type": 2 00:12:09.640 } 
00:12:09.640 ], 00:12:09.640 "driver_specific": {} 00:12:09.640 } 00:12:09.640 ] 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.640 14:11:46 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:09.640 "name": "Existed_Raid", 00:12:09.640 "uuid": "3960ff99-111d-47e7-8e14-a858b815a8e9", 00:12:09.640 "strip_size_kb": 64, 00:12:09.640 "state": "configuring", 00:12:09.640 "raid_level": "raid0", 00:12:09.640 "superblock": true, 00:12:09.640 "num_base_bdevs": 4, 00:12:09.640 "num_base_bdevs_discovered": 1, 00:12:09.640 "num_base_bdevs_operational": 4, 00:12:09.640 "base_bdevs_list": [ 00:12:09.640 { 00:12:09.640 "name": "BaseBdev1", 00:12:09.640 "uuid": "1734547f-61a4-4fe8-a5d7-dc4738711ad1", 00:12:09.640 "is_configured": true, 00:12:09.640 "data_offset": 2048, 00:12:09.640 "data_size": 63488 00:12:09.640 }, 00:12:09.640 { 00:12:09.640 "name": "BaseBdev2", 00:12:09.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.640 "is_configured": false, 00:12:09.640 "data_offset": 0, 00:12:09.640 "data_size": 0 00:12:09.640 }, 00:12:09.640 { 00:12:09.640 "name": "BaseBdev3", 00:12:09.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.640 "is_configured": false, 00:12:09.640 "data_offset": 0, 00:12:09.640 "data_size": 0 00:12:09.640 }, 00:12:09.640 { 00:12:09.640 "name": "BaseBdev4", 00:12:09.640 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.640 "is_configured": false, 00:12:09.640 "data_offset": 0, 00:12:09.640 "data_size": 0 00:12:09.640 } 00:12:09.640 ] 00:12:09.640 }' 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:09.640 14:11:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.207 14:11:47 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.207 [2024-11-27 14:11:47.237602] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:10.207 [2024-11-27 14:11:47.237890] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.207 [2024-11-27 14:11:47.249724] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:10.207 [2024-11-27 14:11:47.252481] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:10.207 [2024-11-27 14:11:47.252545] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:10.207 [2024-11-27 14:11:47.252562] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:10.207 [2024-11-27 14:11:47.252580] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:10.207 [2024-11-27 14:11:47.252591] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:10.207 [2024-11-27 14:11:47.252605] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.207 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:10.207 "name": "Existed_Raid", 00:12:10.207 "uuid": "3745aa92-79a7-458c-b12f-ff5388475fe9", 00:12:10.207 "strip_size_kb": 64, 00:12:10.207 "state": "configuring", 00:12:10.207 "raid_level": "raid0", 00:12:10.207 "superblock": true, 00:12:10.207 "num_base_bdevs": 4, 00:12:10.207 "num_base_bdevs_discovered": 1, 00:12:10.207 "num_base_bdevs_operational": 4, 00:12:10.207 "base_bdevs_list": [ 00:12:10.207 { 00:12:10.207 "name": "BaseBdev1", 00:12:10.207 "uuid": "1734547f-61a4-4fe8-a5d7-dc4738711ad1", 00:12:10.207 "is_configured": true, 00:12:10.208 "data_offset": 2048, 00:12:10.208 "data_size": 63488 00:12:10.208 }, 00:12:10.208 { 00:12:10.208 "name": "BaseBdev2", 00:12:10.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.208 "is_configured": false, 00:12:10.208 "data_offset": 0, 00:12:10.208 "data_size": 0 00:12:10.208 }, 00:12:10.208 { 00:12:10.208 "name": "BaseBdev3", 00:12:10.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.208 "is_configured": false, 00:12:10.208 "data_offset": 0, 00:12:10.208 "data_size": 0 00:12:10.208 }, 00:12:10.208 { 00:12:10.208 "name": "BaseBdev4", 00:12:10.208 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.208 "is_configured": false, 00:12:10.208 "data_offset": 0, 00:12:10.208 "data_size": 0 00:12:10.208 } 00:12:10.208 ] 00:12:10.208 }' 00:12:10.208 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.208 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 [2024-11-27 14:11:47.849410] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:10.776 BaseBdev2 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 [ 00:12:10.776 { 00:12:10.776 "name": "BaseBdev2", 00:12:10.776 "aliases": [ 00:12:10.776 "529a3b6d-9ba6-477f-a276-5ebe0e225a83" 00:12:10.776 ], 00:12:10.776 "product_name": "Malloc disk", 00:12:10.776 "block_size": 512, 00:12:10.776 "num_blocks": 65536, 00:12:10.776 "uuid": "529a3b6d-9ba6-477f-a276-5ebe0e225a83", 
00:12:10.776 "assigned_rate_limits": { 00:12:10.776 "rw_ios_per_sec": 0, 00:12:10.776 "rw_mbytes_per_sec": 0, 00:12:10.776 "r_mbytes_per_sec": 0, 00:12:10.776 "w_mbytes_per_sec": 0 00:12:10.776 }, 00:12:10.776 "claimed": true, 00:12:10.776 "claim_type": "exclusive_write", 00:12:10.776 "zoned": false, 00:12:10.776 "supported_io_types": { 00:12:10.776 "read": true, 00:12:10.776 "write": true, 00:12:10.776 "unmap": true, 00:12:10.776 "flush": true, 00:12:10.776 "reset": true, 00:12:10.776 "nvme_admin": false, 00:12:10.776 "nvme_io": false, 00:12:10.776 "nvme_io_md": false, 00:12:10.776 "write_zeroes": true, 00:12:10.776 "zcopy": true, 00:12:10.776 "get_zone_info": false, 00:12:10.776 "zone_management": false, 00:12:10.776 "zone_append": false, 00:12:10.776 "compare": false, 00:12:10.776 "compare_and_write": false, 00:12:10.776 "abort": true, 00:12:10.776 "seek_hole": false, 00:12:10.776 "seek_data": false, 00:12:10.776 "copy": true, 00:12:10.776 "nvme_iov_md": false 00:12:10.776 }, 00:12:10.776 "memory_domains": [ 00:12:10.776 { 00:12:10.776 "dma_device_id": "system", 00:12:10.776 "dma_device_type": 1 00:12:10.776 }, 00:12:10.776 { 00:12:10.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.776 "dma_device_type": 2 00:12:10.776 } 00:12:10.776 ], 00:12:10.776 "driver_specific": {} 00:12:10.776 } 00:12:10.776 ] 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.776 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:10.776 "name": "Existed_Raid", 00:12:10.776 "uuid": "3745aa92-79a7-458c-b12f-ff5388475fe9", 00:12:10.776 "strip_size_kb": 64, 00:12:10.776 "state": "configuring", 00:12:10.776 "raid_level": "raid0", 00:12:10.776 "superblock": true, 00:12:10.776 "num_base_bdevs": 4, 00:12:10.776 "num_base_bdevs_discovered": 2, 00:12:10.776 
"num_base_bdevs_operational": 4, 00:12:10.776 "base_bdevs_list": [ 00:12:10.776 { 00:12:10.776 "name": "BaseBdev1", 00:12:10.776 "uuid": "1734547f-61a4-4fe8-a5d7-dc4738711ad1", 00:12:10.776 "is_configured": true, 00:12:10.776 "data_offset": 2048, 00:12:10.776 "data_size": 63488 00:12:10.776 }, 00:12:10.776 { 00:12:10.776 "name": "BaseBdev2", 00:12:10.776 "uuid": "529a3b6d-9ba6-477f-a276-5ebe0e225a83", 00:12:10.776 "is_configured": true, 00:12:10.776 "data_offset": 2048, 00:12:10.776 "data_size": 63488 00:12:10.776 }, 00:12:10.776 { 00:12:10.776 "name": "BaseBdev3", 00:12:10.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.777 "is_configured": false, 00:12:10.777 "data_offset": 0, 00:12:10.777 "data_size": 0 00:12:10.777 }, 00:12:10.777 { 00:12:10.777 "name": "BaseBdev4", 00:12:10.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:10.777 "is_configured": false, 00:12:10.777 "data_offset": 0, 00:12:10.777 "data_size": 0 00:12:10.777 } 00:12:10.777 ] 00:12:10.777 }' 00:12:10.777 14:11:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:10.777 14:11:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.345 [2024-11-27 14:11:48.443770] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:11.345 BaseBdev3 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.345 [ 00:12:11.345 { 00:12:11.345 "name": "BaseBdev3", 00:12:11.345 "aliases": [ 00:12:11.345 "8502398a-1929-4d5d-835c-56c7b7340bd4" 00:12:11.345 ], 00:12:11.345 "product_name": "Malloc disk", 00:12:11.345 "block_size": 512, 00:12:11.345 "num_blocks": 65536, 00:12:11.345 "uuid": "8502398a-1929-4d5d-835c-56c7b7340bd4", 00:12:11.345 "assigned_rate_limits": { 00:12:11.345 "rw_ios_per_sec": 0, 00:12:11.345 "rw_mbytes_per_sec": 0, 00:12:11.345 "r_mbytes_per_sec": 0, 00:12:11.345 "w_mbytes_per_sec": 0 00:12:11.345 }, 00:12:11.345 "claimed": true, 00:12:11.345 "claim_type": "exclusive_write", 00:12:11.345 "zoned": false, 00:12:11.345 "supported_io_types": { 
00:12:11.345 "read": true, 00:12:11.345 "write": true, 00:12:11.345 "unmap": true, 00:12:11.345 "flush": true, 00:12:11.345 "reset": true, 00:12:11.345 "nvme_admin": false, 00:12:11.345 "nvme_io": false, 00:12:11.345 "nvme_io_md": false, 00:12:11.345 "write_zeroes": true, 00:12:11.345 "zcopy": true, 00:12:11.345 "get_zone_info": false, 00:12:11.345 "zone_management": false, 00:12:11.345 "zone_append": false, 00:12:11.345 "compare": false, 00:12:11.345 "compare_and_write": false, 00:12:11.345 "abort": true, 00:12:11.345 "seek_hole": false, 00:12:11.345 "seek_data": false, 00:12:11.345 "copy": true, 00:12:11.345 "nvme_iov_md": false 00:12:11.345 }, 00:12:11.345 "memory_domains": [ 00:12:11.345 { 00:12:11.345 "dma_device_id": "system", 00:12:11.345 "dma_device_type": 1 00:12:11.345 }, 00:12:11.345 { 00:12:11.345 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.345 "dma_device_type": 2 00:12:11.345 } 00:12:11.345 ], 00:12:11.345 "driver_specific": {} 00:12:11.345 } 00:12:11.345 ] 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.345 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.345 "name": "Existed_Raid", 00:12:11.345 "uuid": "3745aa92-79a7-458c-b12f-ff5388475fe9", 00:12:11.345 "strip_size_kb": 64, 00:12:11.345 "state": "configuring", 00:12:11.345 "raid_level": "raid0", 00:12:11.345 "superblock": true, 00:12:11.345 "num_base_bdevs": 4, 00:12:11.345 "num_base_bdevs_discovered": 3, 00:12:11.345 "num_base_bdevs_operational": 4, 00:12:11.345 "base_bdevs_list": [ 00:12:11.345 { 00:12:11.345 "name": "BaseBdev1", 00:12:11.345 "uuid": "1734547f-61a4-4fe8-a5d7-dc4738711ad1", 00:12:11.345 "is_configured": true, 00:12:11.345 "data_offset": 2048, 00:12:11.345 "data_size": 63488 00:12:11.345 }, 00:12:11.345 { 00:12:11.345 "name": "BaseBdev2", 00:12:11.345 
"uuid": "529a3b6d-9ba6-477f-a276-5ebe0e225a83", 00:12:11.345 "is_configured": true, 00:12:11.345 "data_offset": 2048, 00:12:11.345 "data_size": 63488 00:12:11.345 }, 00:12:11.345 { 00:12:11.345 "name": "BaseBdev3", 00:12:11.345 "uuid": "8502398a-1929-4d5d-835c-56c7b7340bd4", 00:12:11.345 "is_configured": true, 00:12:11.345 "data_offset": 2048, 00:12:11.345 "data_size": 63488 00:12:11.345 }, 00:12:11.345 { 00:12:11.345 "name": "BaseBdev4", 00:12:11.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:11.346 "is_configured": false, 00:12:11.346 "data_offset": 0, 00:12:11.346 "data_size": 0 00:12:11.346 } 00:12:11.346 ] 00:12:11.346 }' 00:12:11.346 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.346 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.914 14:11:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:11.914 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.914 14:11:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.914 [2024-11-27 14:11:49.026941] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:11.914 [2024-11-27 14:11:49.027456] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:11.914 [2024-11-27 14:11:49.027483] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:11.914 BaseBdev4 00:12:11.914 [2024-11-27 14:11:49.027840] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:11.914 [2024-11-27 14:11:49.028033] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:11.914 [2024-11-27 14:11:49.028061] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:11.914 [2024-11-27 14:11:49.028234] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.914 [ 00:12:11.914 { 00:12:11.914 "name": "BaseBdev4", 00:12:11.914 "aliases": [ 00:12:11.914 "af248a42-6f3b-4f11-ad26-0311c83c6b9b" 00:12:11.914 ], 00:12:11.914 "product_name": "Malloc disk", 00:12:11.914 "block_size": 512, 00:12:11.914 
"num_blocks": 65536, 00:12:11.914 "uuid": "af248a42-6f3b-4f11-ad26-0311c83c6b9b", 00:12:11.914 "assigned_rate_limits": { 00:12:11.914 "rw_ios_per_sec": 0, 00:12:11.914 "rw_mbytes_per_sec": 0, 00:12:11.914 "r_mbytes_per_sec": 0, 00:12:11.914 "w_mbytes_per_sec": 0 00:12:11.914 }, 00:12:11.914 "claimed": true, 00:12:11.914 "claim_type": "exclusive_write", 00:12:11.914 "zoned": false, 00:12:11.914 "supported_io_types": { 00:12:11.914 "read": true, 00:12:11.914 "write": true, 00:12:11.914 "unmap": true, 00:12:11.914 "flush": true, 00:12:11.914 "reset": true, 00:12:11.914 "nvme_admin": false, 00:12:11.914 "nvme_io": false, 00:12:11.914 "nvme_io_md": false, 00:12:11.914 "write_zeroes": true, 00:12:11.914 "zcopy": true, 00:12:11.914 "get_zone_info": false, 00:12:11.914 "zone_management": false, 00:12:11.914 "zone_append": false, 00:12:11.914 "compare": false, 00:12:11.914 "compare_and_write": false, 00:12:11.914 "abort": true, 00:12:11.914 "seek_hole": false, 00:12:11.914 "seek_data": false, 00:12:11.914 "copy": true, 00:12:11.914 "nvme_iov_md": false 00:12:11.914 }, 00:12:11.914 "memory_domains": [ 00:12:11.914 { 00:12:11.914 "dma_device_id": "system", 00:12:11.914 "dma_device_type": 1 00:12:11.914 }, 00:12:11.914 { 00:12:11.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:11.914 "dma_device_type": 2 00:12:11.914 } 00:12:11.914 ], 00:12:11.914 "driver_specific": {} 00:12:11.914 } 00:12:11.914 ] 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 
00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:11.914 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:11.915 "name": "Existed_Raid", 00:12:11.915 "uuid": "3745aa92-79a7-458c-b12f-ff5388475fe9", 00:12:11.915 "strip_size_kb": 64, 00:12:11.915 "state": "online", 00:12:11.915 "raid_level": "raid0", 00:12:11.915 "superblock": true, 00:12:11.915 "num_base_bdevs": 4, 
00:12:11.915 "num_base_bdevs_discovered": 4, 00:12:11.915 "num_base_bdevs_operational": 4, 00:12:11.915 "base_bdevs_list": [ 00:12:11.915 { 00:12:11.915 "name": "BaseBdev1", 00:12:11.915 "uuid": "1734547f-61a4-4fe8-a5d7-dc4738711ad1", 00:12:11.915 "is_configured": true, 00:12:11.915 "data_offset": 2048, 00:12:11.915 "data_size": 63488 00:12:11.915 }, 00:12:11.915 { 00:12:11.915 "name": "BaseBdev2", 00:12:11.915 "uuid": "529a3b6d-9ba6-477f-a276-5ebe0e225a83", 00:12:11.915 "is_configured": true, 00:12:11.915 "data_offset": 2048, 00:12:11.915 "data_size": 63488 00:12:11.915 }, 00:12:11.915 { 00:12:11.915 "name": "BaseBdev3", 00:12:11.915 "uuid": "8502398a-1929-4d5d-835c-56c7b7340bd4", 00:12:11.915 "is_configured": true, 00:12:11.915 "data_offset": 2048, 00:12:11.915 "data_size": 63488 00:12:11.915 }, 00:12:11.915 { 00:12:11.915 "name": "BaseBdev4", 00:12:11.915 "uuid": "af248a42-6f3b-4f11-ad26-0311c83c6b9b", 00:12:11.915 "is_configured": true, 00:12:11.915 "data_offset": 2048, 00:12:11.915 "data_size": 63488 00:12:11.915 } 00:12:11.915 ] 00:12:11.915 }' 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:11.915 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:12.525 
14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.525 [2024-11-27 14:11:49.607791] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:12.525 "name": "Existed_Raid", 00:12:12.525 "aliases": [ 00:12:12.525 "3745aa92-79a7-458c-b12f-ff5388475fe9" 00:12:12.525 ], 00:12:12.525 "product_name": "Raid Volume", 00:12:12.525 "block_size": 512, 00:12:12.525 "num_blocks": 253952, 00:12:12.525 "uuid": "3745aa92-79a7-458c-b12f-ff5388475fe9", 00:12:12.525 "assigned_rate_limits": { 00:12:12.525 "rw_ios_per_sec": 0, 00:12:12.525 "rw_mbytes_per_sec": 0, 00:12:12.525 "r_mbytes_per_sec": 0, 00:12:12.525 "w_mbytes_per_sec": 0 00:12:12.525 }, 00:12:12.525 "claimed": false, 00:12:12.525 "zoned": false, 00:12:12.525 "supported_io_types": { 00:12:12.525 "read": true, 00:12:12.525 "write": true, 00:12:12.525 "unmap": true, 00:12:12.525 "flush": true, 00:12:12.525 "reset": true, 00:12:12.525 "nvme_admin": false, 00:12:12.525 "nvme_io": false, 00:12:12.525 "nvme_io_md": false, 00:12:12.525 "write_zeroes": true, 00:12:12.525 "zcopy": false, 00:12:12.525 "get_zone_info": false, 00:12:12.525 "zone_management": false, 00:12:12.525 "zone_append": false, 00:12:12.525 "compare": false, 00:12:12.525 "compare_and_write": false, 00:12:12.525 "abort": false, 00:12:12.525 "seek_hole": false, 00:12:12.525 "seek_data": false, 00:12:12.525 "copy": false, 00:12:12.525 
"nvme_iov_md": false 00:12:12.525 }, 00:12:12.525 "memory_domains": [ 00:12:12.525 { 00:12:12.525 "dma_device_id": "system", 00:12:12.525 "dma_device_type": 1 00:12:12.525 }, 00:12:12.525 { 00:12:12.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.525 "dma_device_type": 2 00:12:12.525 }, 00:12:12.525 { 00:12:12.525 "dma_device_id": "system", 00:12:12.525 "dma_device_type": 1 00:12:12.525 }, 00:12:12.525 { 00:12:12.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.525 "dma_device_type": 2 00:12:12.525 }, 00:12:12.525 { 00:12:12.525 "dma_device_id": "system", 00:12:12.525 "dma_device_type": 1 00:12:12.525 }, 00:12:12.525 { 00:12:12.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.525 "dma_device_type": 2 00:12:12.525 }, 00:12:12.525 { 00:12:12.525 "dma_device_id": "system", 00:12:12.525 "dma_device_type": 1 00:12:12.525 }, 00:12:12.525 { 00:12:12.525 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:12.525 "dma_device_type": 2 00:12:12.525 } 00:12:12.525 ], 00:12:12.525 "driver_specific": { 00:12:12.525 "raid": { 00:12:12.525 "uuid": "3745aa92-79a7-458c-b12f-ff5388475fe9", 00:12:12.525 "strip_size_kb": 64, 00:12:12.525 "state": "online", 00:12:12.525 "raid_level": "raid0", 00:12:12.525 "superblock": true, 00:12:12.525 "num_base_bdevs": 4, 00:12:12.525 "num_base_bdevs_discovered": 4, 00:12:12.525 "num_base_bdevs_operational": 4, 00:12:12.525 "base_bdevs_list": [ 00:12:12.525 { 00:12:12.525 "name": "BaseBdev1", 00:12:12.525 "uuid": "1734547f-61a4-4fe8-a5d7-dc4738711ad1", 00:12:12.525 "is_configured": true, 00:12:12.525 "data_offset": 2048, 00:12:12.525 "data_size": 63488 00:12:12.525 }, 00:12:12.525 { 00:12:12.525 "name": "BaseBdev2", 00:12:12.525 "uuid": "529a3b6d-9ba6-477f-a276-5ebe0e225a83", 00:12:12.525 "is_configured": true, 00:12:12.525 "data_offset": 2048, 00:12:12.525 "data_size": 63488 00:12:12.525 }, 00:12:12.525 { 00:12:12.525 "name": "BaseBdev3", 00:12:12.525 "uuid": "8502398a-1929-4d5d-835c-56c7b7340bd4", 00:12:12.525 "is_configured": true, 
00:12:12.525 "data_offset": 2048, 00:12:12.525 "data_size": 63488 00:12:12.525 }, 00:12:12.525 { 00:12:12.525 "name": "BaseBdev4", 00:12:12.525 "uuid": "af248a42-6f3b-4f11-ad26-0311c83c6b9b", 00:12:12.525 "is_configured": true, 00:12:12.525 "data_offset": 2048, 00:12:12.525 "data_size": 63488 00:12:12.525 } 00:12:12.525 ] 00:12:12.525 } 00:12:12.525 } 00:12:12.525 }' 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:12.525 BaseBdev2 00:12:12.525 BaseBdev3 00:12:12.525 BaseBdev4' 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.525 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.784 14:11:49 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:12.784 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:12.785 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:12.785 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.785 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.785 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.785 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:12.785 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:12.785 14:11:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:12.785 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.785 14:11:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:12.785 [2024-11-27 14:11:49.975452] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:12.785 [2024-11-27 14:11:49.975634] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:12.785 [2024-11-27 14:11:49.975735] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid0 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:13.076 "name": "Existed_Raid", 00:12:13.076 "uuid": "3745aa92-79a7-458c-b12f-ff5388475fe9", 00:12:13.076 "strip_size_kb": 64, 00:12:13.076 "state": "offline", 00:12:13.076 "raid_level": "raid0", 00:12:13.076 "superblock": true, 00:12:13.076 "num_base_bdevs": 4, 00:12:13.076 "num_base_bdevs_discovered": 3, 00:12:13.076 "num_base_bdevs_operational": 3, 00:12:13.076 "base_bdevs_list": [ 00:12:13.076 { 00:12:13.076 "name": null, 00:12:13.076 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:13.076 "is_configured": false, 00:12:13.076 "data_offset": 0, 00:12:13.076 "data_size": 63488 00:12:13.076 }, 00:12:13.076 { 00:12:13.076 "name": "BaseBdev2", 00:12:13.076 "uuid": "529a3b6d-9ba6-477f-a276-5ebe0e225a83", 00:12:13.076 "is_configured": true, 00:12:13.076 "data_offset": 2048, 00:12:13.076 "data_size": 63488 00:12:13.076 }, 00:12:13.076 { 00:12:13.076 "name": "BaseBdev3", 00:12:13.076 "uuid": "8502398a-1929-4d5d-835c-56c7b7340bd4", 00:12:13.076 "is_configured": true, 00:12:13.076 "data_offset": 2048, 00:12:13.076 "data_size": 63488 00:12:13.076 }, 00:12:13.076 { 00:12:13.076 "name": "BaseBdev4", 00:12:13.076 "uuid": "af248a42-6f3b-4f11-ad26-0311c83c6b9b", 00:12:13.076 "is_configured": true, 00:12:13.076 "data_offset": 2048, 00:12:13.076 "data_size": 63488 00:12:13.076 } 00:12:13.076 ] 00:12:13.076 }' 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:13.076 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.334 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:13.334 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.334 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.334 
14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:13.334 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.334 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.334 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.593 [2024-11-27 14:11:50.622662] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.593 [2024-11-27 14:11:50.775353] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.593 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.852 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.852 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:13.852 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:13.852 14:11:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:13.852 14:11:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.852 14:11:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.852 [2024-11-27 14:11:50.912701] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:13.852 [2024-11-27 14:11:50.912757] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.852 BaseBdev2 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.852 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.853 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.853 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:13.853 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.853 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.853 [ 00:12:13.853 { 00:12:13.853 "name": "BaseBdev2", 00:12:13.853 "aliases": [ 00:12:13.853 
"dc59ab37-374e-411a-8c69-ebf720a3852e" 00:12:13.853 ], 00:12:13.853 "product_name": "Malloc disk", 00:12:13.853 "block_size": 512, 00:12:13.853 "num_blocks": 65536, 00:12:13.853 "uuid": "dc59ab37-374e-411a-8c69-ebf720a3852e", 00:12:13.853 "assigned_rate_limits": { 00:12:13.853 "rw_ios_per_sec": 0, 00:12:13.853 "rw_mbytes_per_sec": 0, 00:12:13.853 "r_mbytes_per_sec": 0, 00:12:13.853 "w_mbytes_per_sec": 0 00:12:13.853 }, 00:12:13.853 "claimed": false, 00:12:13.853 "zoned": false, 00:12:13.853 "supported_io_types": { 00:12:13.853 "read": true, 00:12:13.853 "write": true, 00:12:13.853 "unmap": true, 00:12:13.853 "flush": true, 00:12:13.853 "reset": true, 00:12:14.112 "nvme_admin": false, 00:12:14.113 "nvme_io": false, 00:12:14.113 "nvme_io_md": false, 00:12:14.113 "write_zeroes": true, 00:12:14.113 "zcopy": true, 00:12:14.113 "get_zone_info": false, 00:12:14.113 "zone_management": false, 00:12:14.113 "zone_append": false, 00:12:14.113 "compare": false, 00:12:14.113 "compare_and_write": false, 00:12:14.113 "abort": true, 00:12:14.113 "seek_hole": false, 00:12:14.113 "seek_data": false, 00:12:14.113 "copy": true, 00:12:14.113 "nvme_iov_md": false 00:12:14.113 }, 00:12:14.113 "memory_domains": [ 00:12:14.113 { 00:12:14.113 "dma_device_id": "system", 00:12:14.113 "dma_device_type": 1 00:12:14.113 }, 00:12:14.113 { 00:12:14.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.113 "dma_device_type": 2 00:12:14.113 } 00:12:14.113 ], 00:12:14.113 "driver_specific": {} 00:12:14.113 } 00:12:14.113 ] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:14.113 14:11:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.113 BaseBdev3 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.113 [ 00:12:14.113 { 
00:12:14.113 "name": "BaseBdev3", 00:12:14.113 "aliases": [ 00:12:14.113 "0aeee6fc-e668-469f-8aae-52eaf42c0dc5" 00:12:14.113 ], 00:12:14.113 "product_name": "Malloc disk", 00:12:14.113 "block_size": 512, 00:12:14.113 "num_blocks": 65536, 00:12:14.113 "uuid": "0aeee6fc-e668-469f-8aae-52eaf42c0dc5", 00:12:14.113 "assigned_rate_limits": { 00:12:14.113 "rw_ios_per_sec": 0, 00:12:14.113 "rw_mbytes_per_sec": 0, 00:12:14.113 "r_mbytes_per_sec": 0, 00:12:14.113 "w_mbytes_per_sec": 0 00:12:14.113 }, 00:12:14.113 "claimed": false, 00:12:14.113 "zoned": false, 00:12:14.113 "supported_io_types": { 00:12:14.113 "read": true, 00:12:14.113 "write": true, 00:12:14.113 "unmap": true, 00:12:14.113 "flush": true, 00:12:14.113 "reset": true, 00:12:14.113 "nvme_admin": false, 00:12:14.113 "nvme_io": false, 00:12:14.113 "nvme_io_md": false, 00:12:14.113 "write_zeroes": true, 00:12:14.113 "zcopy": true, 00:12:14.113 "get_zone_info": false, 00:12:14.113 "zone_management": false, 00:12:14.113 "zone_append": false, 00:12:14.113 "compare": false, 00:12:14.113 "compare_and_write": false, 00:12:14.113 "abort": true, 00:12:14.113 "seek_hole": false, 00:12:14.113 "seek_data": false, 00:12:14.113 "copy": true, 00:12:14.113 "nvme_iov_md": false 00:12:14.113 }, 00:12:14.113 "memory_domains": [ 00:12:14.113 { 00:12:14.113 "dma_device_id": "system", 00:12:14.113 "dma_device_type": 1 00:12:14.113 }, 00:12:14.113 { 00:12:14.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.113 "dma_device_type": 2 00:12:14.113 } 00:12:14.113 ], 00:12:14.113 "driver_specific": {} 00:12:14.113 } 00:12:14.113 ] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.113 BaseBdev4 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:14.113 [ 00:12:14.113 { 00:12:14.113 "name": "BaseBdev4", 00:12:14.113 "aliases": [ 00:12:14.113 "ffaf5491-721f-482b-9f44-4318d68a471e" 00:12:14.113 ], 00:12:14.113 "product_name": "Malloc disk", 00:12:14.113 "block_size": 512, 00:12:14.113 "num_blocks": 65536, 00:12:14.113 "uuid": "ffaf5491-721f-482b-9f44-4318d68a471e", 00:12:14.113 "assigned_rate_limits": { 00:12:14.113 "rw_ios_per_sec": 0, 00:12:14.113 "rw_mbytes_per_sec": 0, 00:12:14.113 "r_mbytes_per_sec": 0, 00:12:14.113 "w_mbytes_per_sec": 0 00:12:14.113 }, 00:12:14.113 "claimed": false, 00:12:14.113 "zoned": false, 00:12:14.113 "supported_io_types": { 00:12:14.113 "read": true, 00:12:14.113 "write": true, 00:12:14.113 "unmap": true, 00:12:14.113 "flush": true, 00:12:14.113 "reset": true, 00:12:14.113 "nvme_admin": false, 00:12:14.113 "nvme_io": false, 00:12:14.113 "nvme_io_md": false, 00:12:14.113 "write_zeroes": true, 00:12:14.113 "zcopy": true, 00:12:14.113 "get_zone_info": false, 00:12:14.113 "zone_management": false, 00:12:14.113 "zone_append": false, 00:12:14.113 "compare": false, 00:12:14.113 "compare_and_write": false, 00:12:14.113 "abort": true, 00:12:14.113 "seek_hole": false, 00:12:14.113 "seek_data": false, 00:12:14.113 "copy": true, 00:12:14.113 "nvme_iov_md": false 00:12:14.113 }, 00:12:14.113 "memory_domains": [ 00:12:14.113 { 00:12:14.113 "dma_device_id": "system", 00:12:14.113 "dma_device_type": 1 00:12:14.113 }, 00:12:14.113 { 00:12:14.113 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:14.113 "dma_device_type": 2 00:12:14.113 } 00:12:14.113 ], 00:12:14.113 "driver_specific": {} 00:12:14.113 } 00:12:14.113 ] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:14.113 14:11:51 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.113 [2024-11-27 14:11:51.289879] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:14.113 [2024-11-27 14:11:51.289966] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:14.113 [2024-11-27 14:11:51.290001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.113 [2024-11-27 14:11:51.292573] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:14.113 [2024-11-27 14:11:51.292640] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:14.113 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.114 "name": "Existed_Raid", 00:12:14.114 "uuid": "766c14ae-a618-4d72-9260-3e800f4fd7d2", 00:12:14.114 "strip_size_kb": 64, 00:12:14.114 "state": "configuring", 00:12:14.114 "raid_level": "raid0", 00:12:14.114 "superblock": true, 00:12:14.114 "num_base_bdevs": 4, 00:12:14.114 "num_base_bdevs_discovered": 3, 00:12:14.114 "num_base_bdevs_operational": 4, 00:12:14.114 "base_bdevs_list": [ 00:12:14.114 { 00:12:14.114 "name": "BaseBdev1", 00:12:14.114 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.114 "is_configured": false, 00:12:14.114 "data_offset": 0, 00:12:14.114 "data_size": 0 00:12:14.114 }, 00:12:14.114 { 00:12:14.114 "name": "BaseBdev2", 00:12:14.114 "uuid": "dc59ab37-374e-411a-8c69-ebf720a3852e", 00:12:14.114 "is_configured": true, 00:12:14.114 "data_offset": 2048, 00:12:14.114 "data_size": 63488 
00:12:14.114 }, 00:12:14.114 { 00:12:14.114 "name": "BaseBdev3", 00:12:14.114 "uuid": "0aeee6fc-e668-469f-8aae-52eaf42c0dc5", 00:12:14.114 "is_configured": true, 00:12:14.114 "data_offset": 2048, 00:12:14.114 "data_size": 63488 00:12:14.114 }, 00:12:14.114 { 00:12:14.114 "name": "BaseBdev4", 00:12:14.114 "uuid": "ffaf5491-721f-482b-9f44-4318d68a471e", 00:12:14.114 "is_configured": true, 00:12:14.114 "data_offset": 2048, 00:12:14.114 "data_size": 63488 00:12:14.114 } 00:12:14.114 ] 00:12:14.114 }' 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.114 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.680 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:14.680 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.680 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.681 [2024-11-27 14:11:51.802065] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:14.681 "name": "Existed_Raid", 00:12:14.681 "uuid": "766c14ae-a618-4d72-9260-3e800f4fd7d2", 00:12:14.681 "strip_size_kb": 64, 00:12:14.681 "state": "configuring", 00:12:14.681 "raid_level": "raid0", 00:12:14.681 "superblock": true, 00:12:14.681 "num_base_bdevs": 4, 00:12:14.681 "num_base_bdevs_discovered": 2, 00:12:14.681 "num_base_bdevs_operational": 4, 00:12:14.681 "base_bdevs_list": [ 00:12:14.681 { 00:12:14.681 "name": "BaseBdev1", 00:12:14.681 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:14.681 "is_configured": false, 00:12:14.681 "data_offset": 0, 00:12:14.681 "data_size": 0 00:12:14.681 }, 00:12:14.681 { 00:12:14.681 "name": null, 00:12:14.681 "uuid": "dc59ab37-374e-411a-8c69-ebf720a3852e", 00:12:14.681 "is_configured": false, 00:12:14.681 "data_offset": 0, 00:12:14.681 "data_size": 63488 
00:12:14.681 }, 00:12:14.681 { 00:12:14.681 "name": "BaseBdev3", 00:12:14.681 "uuid": "0aeee6fc-e668-469f-8aae-52eaf42c0dc5", 00:12:14.681 "is_configured": true, 00:12:14.681 "data_offset": 2048, 00:12:14.681 "data_size": 63488 00:12:14.681 }, 00:12:14.681 { 00:12:14.681 "name": "BaseBdev4", 00:12:14.681 "uuid": "ffaf5491-721f-482b-9f44-4318d68a471e", 00:12:14.681 "is_configured": true, 00:12:14.681 "data_offset": 2048, 00:12:14.681 "data_size": 63488 00:12:14.681 } 00:12:14.681 ] 00:12:14.681 }' 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:14.681 14:11:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.248 [2024-11-27 14:11:52.457803] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:15.248 BaseBdev1 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.248 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.248 [ 00:12:15.248 { 00:12:15.248 "name": "BaseBdev1", 00:12:15.248 "aliases": [ 00:12:15.248 "e36b7272-6479-4c59-bc2b-f3a79279350d" 00:12:15.248 ], 00:12:15.248 "product_name": "Malloc disk", 00:12:15.248 "block_size": 512, 00:12:15.248 "num_blocks": 65536, 00:12:15.248 "uuid": "e36b7272-6479-4c59-bc2b-f3a79279350d", 00:12:15.248 "assigned_rate_limits": { 00:12:15.248 "rw_ios_per_sec": 0, 00:12:15.248 "rw_mbytes_per_sec": 0, 
00:12:15.248 "r_mbytes_per_sec": 0, 00:12:15.248 "w_mbytes_per_sec": 0 00:12:15.248 }, 00:12:15.248 "claimed": true, 00:12:15.248 "claim_type": "exclusive_write", 00:12:15.248 "zoned": false, 00:12:15.248 "supported_io_types": { 00:12:15.248 "read": true, 00:12:15.248 "write": true, 00:12:15.248 "unmap": true, 00:12:15.248 "flush": true, 00:12:15.248 "reset": true, 00:12:15.248 "nvme_admin": false, 00:12:15.248 "nvme_io": false, 00:12:15.248 "nvme_io_md": false, 00:12:15.248 "write_zeroes": true, 00:12:15.248 "zcopy": true, 00:12:15.248 "get_zone_info": false, 00:12:15.248 "zone_management": false, 00:12:15.248 "zone_append": false, 00:12:15.248 "compare": false, 00:12:15.249 "compare_and_write": false, 00:12:15.249 "abort": true, 00:12:15.249 "seek_hole": false, 00:12:15.249 "seek_data": false, 00:12:15.249 "copy": true, 00:12:15.249 "nvme_iov_md": false 00:12:15.249 }, 00:12:15.249 "memory_domains": [ 00:12:15.249 { 00:12:15.249 "dma_device_id": "system", 00:12:15.249 "dma_device_type": 1 00:12:15.249 }, 00:12:15.249 { 00:12:15.249 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:15.249 "dma_device_type": 2 00:12:15.249 } 00:12:15.249 ], 00:12:15.249 "driver_specific": {} 00:12:15.249 } 00:12:15.249 ] 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:15.249 14:11:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.249 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.508 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:15.508 "name": "Existed_Raid", 00:12:15.508 "uuid": "766c14ae-a618-4d72-9260-3e800f4fd7d2", 00:12:15.508 "strip_size_kb": 64, 00:12:15.508 "state": "configuring", 00:12:15.508 "raid_level": "raid0", 00:12:15.508 "superblock": true, 00:12:15.508 "num_base_bdevs": 4, 00:12:15.508 "num_base_bdevs_discovered": 3, 00:12:15.508 "num_base_bdevs_operational": 4, 00:12:15.508 "base_bdevs_list": [ 00:12:15.508 { 00:12:15.508 "name": "BaseBdev1", 00:12:15.508 "uuid": "e36b7272-6479-4c59-bc2b-f3a79279350d", 00:12:15.508 "is_configured": true, 00:12:15.508 "data_offset": 2048, 00:12:15.508 "data_size": 63488 00:12:15.508 }, 00:12:15.508 { 
00:12:15.508 "name": null, 00:12:15.508 "uuid": "dc59ab37-374e-411a-8c69-ebf720a3852e", 00:12:15.508 "is_configured": false, 00:12:15.508 "data_offset": 0, 00:12:15.508 "data_size": 63488 00:12:15.508 }, 00:12:15.508 { 00:12:15.508 "name": "BaseBdev3", 00:12:15.508 "uuid": "0aeee6fc-e668-469f-8aae-52eaf42c0dc5", 00:12:15.508 "is_configured": true, 00:12:15.508 "data_offset": 2048, 00:12:15.508 "data_size": 63488 00:12:15.508 }, 00:12:15.508 { 00:12:15.508 "name": "BaseBdev4", 00:12:15.508 "uuid": "ffaf5491-721f-482b-9f44-4318d68a471e", 00:12:15.508 "is_configured": true, 00:12:15.508 "data_offset": 2048, 00:12:15.508 "data_size": 63488 00:12:15.508 } 00:12:15.508 ] 00:12:15.508 }' 00:12:15.508 14:11:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:15.508 14:11:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.767 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:15.767 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.767 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.767 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.026 [2024-11-27 14:11:53.094197] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.026 14:11:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.026 "name": "Existed_Raid", 00:12:16.026 "uuid": "766c14ae-a618-4d72-9260-3e800f4fd7d2", 00:12:16.026 "strip_size_kb": 64, 00:12:16.026 "state": "configuring", 00:12:16.026 "raid_level": "raid0", 00:12:16.026 "superblock": true, 00:12:16.026 "num_base_bdevs": 4, 00:12:16.026 "num_base_bdevs_discovered": 2, 00:12:16.026 "num_base_bdevs_operational": 4, 00:12:16.026 "base_bdevs_list": [ 00:12:16.026 { 00:12:16.026 "name": "BaseBdev1", 00:12:16.026 "uuid": "e36b7272-6479-4c59-bc2b-f3a79279350d", 00:12:16.026 "is_configured": true, 00:12:16.026 "data_offset": 2048, 00:12:16.026 "data_size": 63488 00:12:16.026 }, 00:12:16.026 { 00:12:16.026 "name": null, 00:12:16.026 "uuid": "dc59ab37-374e-411a-8c69-ebf720a3852e", 00:12:16.026 "is_configured": false, 00:12:16.026 "data_offset": 0, 00:12:16.026 "data_size": 63488 00:12:16.026 }, 00:12:16.026 { 00:12:16.026 "name": null, 00:12:16.026 "uuid": "0aeee6fc-e668-469f-8aae-52eaf42c0dc5", 00:12:16.026 "is_configured": false, 00:12:16.026 "data_offset": 0, 00:12:16.026 "data_size": 63488 00:12:16.026 }, 00:12:16.026 { 00:12:16.026 "name": "BaseBdev4", 00:12:16.026 "uuid": "ffaf5491-721f-482b-9f44-4318d68a471e", 00:12:16.026 "is_configured": true, 00:12:16.026 "data_offset": 2048, 00:12:16.026 "data_size": 63488 00:12:16.026 } 00:12:16.026 ] 00:12:16.026 }' 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.026 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.594 14:11:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.594 [2024-11-27 14:11:53.690293] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:16.594 "name": "Existed_Raid", 00:12:16.594 "uuid": "766c14ae-a618-4d72-9260-3e800f4fd7d2", 00:12:16.594 "strip_size_kb": 64, 00:12:16.594 "state": "configuring", 00:12:16.594 "raid_level": "raid0", 00:12:16.594 "superblock": true, 00:12:16.594 "num_base_bdevs": 4, 00:12:16.594 "num_base_bdevs_discovered": 3, 00:12:16.594 "num_base_bdevs_operational": 4, 00:12:16.594 "base_bdevs_list": [ 00:12:16.594 { 00:12:16.594 "name": "BaseBdev1", 00:12:16.594 "uuid": "e36b7272-6479-4c59-bc2b-f3a79279350d", 00:12:16.594 "is_configured": true, 00:12:16.594 "data_offset": 2048, 00:12:16.594 "data_size": 63488 00:12:16.594 }, 00:12:16.594 { 00:12:16.594 "name": null, 00:12:16.594 "uuid": "dc59ab37-374e-411a-8c69-ebf720a3852e", 00:12:16.594 "is_configured": false, 00:12:16.594 "data_offset": 0, 00:12:16.594 "data_size": 63488 00:12:16.594 }, 00:12:16.594 { 00:12:16.594 "name": "BaseBdev3", 00:12:16.594 "uuid": "0aeee6fc-e668-469f-8aae-52eaf42c0dc5", 00:12:16.594 "is_configured": true, 00:12:16.594 "data_offset": 2048, 00:12:16.594 "data_size": 63488 00:12:16.594 }, 00:12:16.594 { 00:12:16.594 "name": "BaseBdev4", 00:12:16.594 "uuid": 
"ffaf5491-721f-482b-9f44-4318d68a471e", 00:12:16.594 "is_configured": true, 00:12:16.594 "data_offset": 2048, 00:12:16.594 "data_size": 63488 00:12:16.594 } 00:12:16.594 ] 00:12:16.594 }' 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:16.594 14:11:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.162 [2024-11-27 14:11:54.246556] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.162 "name": "Existed_Raid", 00:12:17.162 "uuid": "766c14ae-a618-4d72-9260-3e800f4fd7d2", 00:12:17.162 "strip_size_kb": 64, 00:12:17.162 "state": "configuring", 00:12:17.162 "raid_level": "raid0", 00:12:17.162 "superblock": true, 00:12:17.162 "num_base_bdevs": 4, 00:12:17.162 "num_base_bdevs_discovered": 2, 00:12:17.162 "num_base_bdevs_operational": 4, 00:12:17.162 "base_bdevs_list": [ 00:12:17.162 { 00:12:17.162 "name": null, 00:12:17.162 
"uuid": "e36b7272-6479-4c59-bc2b-f3a79279350d", 00:12:17.162 "is_configured": false, 00:12:17.162 "data_offset": 0, 00:12:17.162 "data_size": 63488 00:12:17.162 }, 00:12:17.162 { 00:12:17.162 "name": null, 00:12:17.162 "uuid": "dc59ab37-374e-411a-8c69-ebf720a3852e", 00:12:17.162 "is_configured": false, 00:12:17.162 "data_offset": 0, 00:12:17.162 "data_size": 63488 00:12:17.162 }, 00:12:17.162 { 00:12:17.162 "name": "BaseBdev3", 00:12:17.162 "uuid": "0aeee6fc-e668-469f-8aae-52eaf42c0dc5", 00:12:17.162 "is_configured": true, 00:12:17.162 "data_offset": 2048, 00:12:17.162 "data_size": 63488 00:12:17.162 }, 00:12:17.162 { 00:12:17.162 "name": "BaseBdev4", 00:12:17.162 "uuid": "ffaf5491-721f-482b-9f44-4318d68a471e", 00:12:17.162 "is_configured": true, 00:12:17.162 "data_offset": 2048, 00:12:17.162 "data_size": 63488 00:12:17.162 } 00:12:17.162 ] 00:12:17.162 }' 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.162 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.730 [2024-11-27 14:11:54.894168] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:17.730 14:11:54 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:17.730 "name": "Existed_Raid", 00:12:17.730 "uuid": "766c14ae-a618-4d72-9260-3e800f4fd7d2", 00:12:17.730 "strip_size_kb": 64, 00:12:17.730 "state": "configuring", 00:12:17.730 "raid_level": "raid0", 00:12:17.730 "superblock": true, 00:12:17.730 "num_base_bdevs": 4, 00:12:17.730 "num_base_bdevs_discovered": 3, 00:12:17.730 "num_base_bdevs_operational": 4, 00:12:17.730 "base_bdevs_list": [ 00:12:17.730 { 00:12:17.730 "name": null, 00:12:17.730 "uuid": "e36b7272-6479-4c59-bc2b-f3a79279350d", 00:12:17.730 "is_configured": false, 00:12:17.730 "data_offset": 0, 00:12:17.730 "data_size": 63488 00:12:17.730 }, 00:12:17.730 { 00:12:17.730 "name": "BaseBdev2", 00:12:17.730 "uuid": "dc59ab37-374e-411a-8c69-ebf720a3852e", 00:12:17.730 "is_configured": true, 00:12:17.730 "data_offset": 2048, 00:12:17.730 "data_size": 63488 00:12:17.730 }, 00:12:17.730 { 00:12:17.730 "name": "BaseBdev3", 00:12:17.730 "uuid": "0aeee6fc-e668-469f-8aae-52eaf42c0dc5", 00:12:17.730 "is_configured": true, 00:12:17.730 "data_offset": 2048, 00:12:17.730 "data_size": 63488 00:12:17.730 }, 00:12:17.730 { 00:12:17.730 "name": "BaseBdev4", 00:12:17.730 "uuid": "ffaf5491-721f-482b-9f44-4318d68a471e", 00:12:17.730 "is_configured": true, 00:12:17.730 "data_offset": 2048, 00:12:17.730 "data_size": 63488 00:12:17.730 } 00:12:17.730 ] 00:12:17.730 }' 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:17.730 14:11:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.297 14:11:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e36b7272-6479-4c59-bc2b-f3a79279350d 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.297 [2024-11-27 14:11:55.537008] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:18.297 [2024-11-27 14:11:55.537333] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:18.297 [2024-11-27 14:11:55.537351] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:18.297 NewBaseBdev 00:12:18.297 [2024-11-27 14:11:55.537674] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:12:18.297 [2024-11-27 14:11:55.537864] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:18.297 [2024-11-27 14:11:55.537886] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:18.297 [2024-11-27 14:11:55.538046] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.297 
14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.297 [ 00:12:18.297 { 00:12:18.297 "name": "NewBaseBdev", 00:12:18.297 "aliases": [ 00:12:18.297 "e36b7272-6479-4c59-bc2b-f3a79279350d" 00:12:18.297 ], 00:12:18.297 "product_name": "Malloc disk", 00:12:18.297 "block_size": 512, 00:12:18.297 "num_blocks": 65536, 00:12:18.297 "uuid": "e36b7272-6479-4c59-bc2b-f3a79279350d", 00:12:18.297 "assigned_rate_limits": { 00:12:18.297 "rw_ios_per_sec": 0, 00:12:18.297 "rw_mbytes_per_sec": 0, 00:12:18.297 "r_mbytes_per_sec": 0, 00:12:18.297 "w_mbytes_per_sec": 0 00:12:18.297 }, 00:12:18.297 "claimed": true, 00:12:18.297 "claim_type": "exclusive_write", 00:12:18.297 "zoned": false, 00:12:18.297 "supported_io_types": { 00:12:18.297 "read": true, 00:12:18.297 "write": true, 00:12:18.297 "unmap": true, 00:12:18.297 "flush": true, 00:12:18.297 "reset": true, 00:12:18.297 "nvme_admin": false, 00:12:18.297 "nvme_io": false, 00:12:18.297 "nvme_io_md": false, 00:12:18.297 "write_zeroes": true, 00:12:18.297 "zcopy": true, 00:12:18.297 "get_zone_info": false, 00:12:18.297 "zone_management": false, 00:12:18.297 "zone_append": false, 00:12:18.297 "compare": false, 00:12:18.297 "compare_and_write": false, 00:12:18.297 "abort": true, 00:12:18.297 "seek_hole": false, 00:12:18.297 "seek_data": false, 00:12:18.297 "copy": true, 00:12:18.297 "nvme_iov_md": false 00:12:18.297 }, 00:12:18.297 "memory_domains": [ 00:12:18.297 { 00:12:18.297 "dma_device_id": "system", 00:12:18.297 "dma_device_type": 1 00:12:18.297 }, 00:12:18.297 { 00:12:18.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.297 "dma_device_type": 2 00:12:18.297 } 00:12:18.297 ], 00:12:18.297 "driver_specific": {} 00:12:18.297 } 00:12:18.297 ] 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:18.297 14:11:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:18.297 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:18.556 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:18.556 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:18.556 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.556 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:18.556 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:18.556 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:18.556 "name": "Existed_Raid", 00:12:18.556 "uuid": "766c14ae-a618-4d72-9260-3e800f4fd7d2", 00:12:18.556 "strip_size_kb": 64, 00:12:18.556 
"state": "online", 00:12:18.556 "raid_level": "raid0", 00:12:18.556 "superblock": true, 00:12:18.556 "num_base_bdevs": 4, 00:12:18.556 "num_base_bdevs_discovered": 4, 00:12:18.556 "num_base_bdevs_operational": 4, 00:12:18.556 "base_bdevs_list": [ 00:12:18.556 { 00:12:18.556 "name": "NewBaseBdev", 00:12:18.556 "uuid": "e36b7272-6479-4c59-bc2b-f3a79279350d", 00:12:18.556 "is_configured": true, 00:12:18.556 "data_offset": 2048, 00:12:18.556 "data_size": 63488 00:12:18.556 }, 00:12:18.556 { 00:12:18.556 "name": "BaseBdev2", 00:12:18.556 "uuid": "dc59ab37-374e-411a-8c69-ebf720a3852e", 00:12:18.556 "is_configured": true, 00:12:18.556 "data_offset": 2048, 00:12:18.556 "data_size": 63488 00:12:18.556 }, 00:12:18.556 { 00:12:18.556 "name": "BaseBdev3", 00:12:18.556 "uuid": "0aeee6fc-e668-469f-8aae-52eaf42c0dc5", 00:12:18.556 "is_configured": true, 00:12:18.556 "data_offset": 2048, 00:12:18.556 "data_size": 63488 00:12:18.556 }, 00:12:18.556 { 00:12:18.556 "name": "BaseBdev4", 00:12:18.556 "uuid": "ffaf5491-721f-482b-9f44-4318d68a471e", 00:12:18.556 "is_configured": true, 00:12:18.556 "data_offset": 2048, 00:12:18.556 "data_size": 63488 00:12:18.556 } 00:12:18.556 ] 00:12:18.556 }' 00:12:18.556 14:11:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:18.556 14:11:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.122 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:19.122 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:19.122 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:19.122 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:19.122 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:19.122 
14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:19.122 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:19.122 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:19.123 [2024-11-27 14:11:56.101715] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:19.123 "name": "Existed_Raid", 00:12:19.123 "aliases": [ 00:12:19.123 "766c14ae-a618-4d72-9260-3e800f4fd7d2" 00:12:19.123 ], 00:12:19.123 "product_name": "Raid Volume", 00:12:19.123 "block_size": 512, 00:12:19.123 "num_blocks": 253952, 00:12:19.123 "uuid": "766c14ae-a618-4d72-9260-3e800f4fd7d2", 00:12:19.123 "assigned_rate_limits": { 00:12:19.123 "rw_ios_per_sec": 0, 00:12:19.123 "rw_mbytes_per_sec": 0, 00:12:19.123 "r_mbytes_per_sec": 0, 00:12:19.123 "w_mbytes_per_sec": 0 00:12:19.123 }, 00:12:19.123 "claimed": false, 00:12:19.123 "zoned": false, 00:12:19.123 "supported_io_types": { 00:12:19.123 "read": true, 00:12:19.123 "write": true, 00:12:19.123 "unmap": true, 00:12:19.123 "flush": true, 00:12:19.123 "reset": true, 00:12:19.123 "nvme_admin": false, 00:12:19.123 "nvme_io": false, 00:12:19.123 "nvme_io_md": false, 00:12:19.123 "write_zeroes": true, 00:12:19.123 "zcopy": false, 00:12:19.123 "get_zone_info": false, 00:12:19.123 "zone_management": false, 00:12:19.123 "zone_append": false, 00:12:19.123 "compare": false, 00:12:19.123 "compare_and_write": false, 00:12:19.123 "abort": 
false, 00:12:19.123 "seek_hole": false, 00:12:19.123 "seek_data": false, 00:12:19.123 "copy": false, 00:12:19.123 "nvme_iov_md": false 00:12:19.123 }, 00:12:19.123 "memory_domains": [ 00:12:19.123 { 00:12:19.123 "dma_device_id": "system", 00:12:19.123 "dma_device_type": 1 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.123 "dma_device_type": 2 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "dma_device_id": "system", 00:12:19.123 "dma_device_type": 1 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.123 "dma_device_type": 2 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "dma_device_id": "system", 00:12:19.123 "dma_device_type": 1 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.123 "dma_device_type": 2 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "dma_device_id": "system", 00:12:19.123 "dma_device_type": 1 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:19.123 "dma_device_type": 2 00:12:19.123 } 00:12:19.123 ], 00:12:19.123 "driver_specific": { 00:12:19.123 "raid": { 00:12:19.123 "uuid": "766c14ae-a618-4d72-9260-3e800f4fd7d2", 00:12:19.123 "strip_size_kb": 64, 00:12:19.123 "state": "online", 00:12:19.123 "raid_level": "raid0", 00:12:19.123 "superblock": true, 00:12:19.123 "num_base_bdevs": 4, 00:12:19.123 "num_base_bdevs_discovered": 4, 00:12:19.123 "num_base_bdevs_operational": 4, 00:12:19.123 "base_bdevs_list": [ 00:12:19.123 { 00:12:19.123 "name": "NewBaseBdev", 00:12:19.123 "uuid": "e36b7272-6479-4c59-bc2b-f3a79279350d", 00:12:19.123 "is_configured": true, 00:12:19.123 "data_offset": 2048, 00:12:19.123 "data_size": 63488 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "name": "BaseBdev2", 00:12:19.123 "uuid": "dc59ab37-374e-411a-8c69-ebf720a3852e", 00:12:19.123 "is_configured": true, 00:12:19.123 "data_offset": 2048, 00:12:19.123 "data_size": 63488 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 
"name": "BaseBdev3", 00:12:19.123 "uuid": "0aeee6fc-e668-469f-8aae-52eaf42c0dc5", 00:12:19.123 "is_configured": true, 00:12:19.123 "data_offset": 2048, 00:12:19.123 "data_size": 63488 00:12:19.123 }, 00:12:19.123 { 00:12:19.123 "name": "BaseBdev4", 00:12:19.123 "uuid": "ffaf5491-721f-482b-9f44-4318d68a471e", 00:12:19.123 "is_configured": true, 00:12:19.123 "data_offset": 2048, 00:12:19.123 "data_size": 63488 00:12:19.123 } 00:12:19.123 ] 00:12:19.123 } 00:12:19.123 } 00:12:19.123 }' 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:19.123 BaseBdev2 00:12:19.123 BaseBdev3 00:12:19.123 BaseBdev4' 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.123 14:11:56 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.123 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:19.382 [2024-11-27 14:11:56.481470] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:19.382 [2024-11-27 14:11:56.481507] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:19.382 [2024-11-27 14:11:56.481600] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:19.382 [2024-11-27 14:11:56.481698] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:19.382 [2024-11-27 14:11:56.481714] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 70051 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 70051 ']' 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 70051 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70051 00:12:19.382 killing process with pid 70051 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70051' 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 70051 00:12:19.382 [2024-11-27 14:11:56.521855] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:19.382 14:11:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 70051 00:12:19.641 [2024-11-27 14:11:56.895640] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:21.018 14:11:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:12:21.018 00:12:21.018 real 0m13.001s 00:12:21.018 user 0m21.503s 00:12:21.018 sys 0m1.794s 00:12:21.018 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:21.018 
************************************ 00:12:21.018 END TEST raid_state_function_test_sb 00:12:21.018 ************************************ 00:12:21.018 14:11:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:21.018 14:11:58 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:12:21.018 14:11:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:21.018 14:11:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:21.018 14:11:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:21.018 ************************************ 00:12:21.018 START TEST raid_superblock_test 00:12:21.018 ************************************ 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid0 4 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid0 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid0 '!=' raid1 ']' 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=70733 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 70733 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 70733 ']' 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.018 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:21.018 14:11:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:21.018 [2024-11-27 14:11:58.141753] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:12:21.018 [2024-11-27 14:11:58.142147] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70733 ] 00:12:21.276 [2024-11-27 14:11:58.335116] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:21.276 [2024-11-27 14:11:58.490816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.534 [2024-11-27 14:11:58.701486] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:21.534 [2024-11-27 14:11:58.701560] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:12:22.109 
14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.109 malloc1 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.109 [2024-11-27 14:11:59.142417] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:22.109 [2024-11-27 14:11:59.142506] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.109 [2024-11-27 14:11:59.142539] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:12:22.109 [2024-11-27 14:11:59.142554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.109 [2024-11-27 14:11:59.145496] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.109 [2024-11-27 14:11:59.145563] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:22.109 pt1 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.109 malloc2 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.109 [2024-11-27 14:11:59.199001] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:22.109 [2024-11-27 14:11:59.199202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.109 [2024-11-27 14:11:59.199287] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:12:22.109 [2024-11-27 14:11:59.199409] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.109 [2024-11-27 14:11:59.202217] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.109 [2024-11-27 14:11:59.202379] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:22.109 
pt2 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.109 malloc3 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.109 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.109 [2024-11-27 14:11:59.260290] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:22.109 [2024-11-27 14:11:59.260369] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.109 [2024-11-27 14:11:59.260401] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:12:22.109 [2024-11-27 14:11:59.260416] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.109 [2024-11-27 14:11:59.263311] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.110 [2024-11-27 14:11:59.263409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:22.110 pt3 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.110 malloc4 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.110 [2024-11-27 14:11:59.312268] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:22.110 [2024-11-27 14:11:59.312355] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:22.110 [2024-11-27 14:11:59.312385] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:12:22.110 [2024-11-27 14:11:59.312399] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:22.110 [2024-11-27 14:11:59.315230] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:22.110 [2024-11-27 14:11:59.315412] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:22.110 pt4 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.110 [2024-11-27 14:11:59.320364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:22.110 [2024-11-27 
14:11:59.322758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:22.110 [2024-11-27 14:11:59.322901] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:22.110 [2024-11-27 14:11:59.322976] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:22.110 [2024-11-27 14:11:59.323231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:12:22.110 [2024-11-27 14:11:59.323250] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:22.110 [2024-11-27 14:11:59.323578] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:22.110 [2024-11-27 14:11:59.323820] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:12:22.110 [2024-11-27 14:11:59.323843] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:12:22.110 [2024-11-27 14:11:59.324031] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:22.110 "name": "raid_bdev1", 00:12:22.110 "uuid": "9f58537c-f85b-404b-bb64-189ab046f5c8", 00:12:22.110 "strip_size_kb": 64, 00:12:22.110 "state": "online", 00:12:22.110 "raid_level": "raid0", 00:12:22.110 "superblock": true, 00:12:22.110 "num_base_bdevs": 4, 00:12:22.110 "num_base_bdevs_discovered": 4, 00:12:22.110 "num_base_bdevs_operational": 4, 00:12:22.110 "base_bdevs_list": [ 00:12:22.110 { 00:12:22.110 "name": "pt1", 00:12:22.110 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.110 "is_configured": true, 00:12:22.110 "data_offset": 2048, 00:12:22.110 "data_size": 63488 00:12:22.110 }, 00:12:22.110 { 00:12:22.110 "name": "pt2", 00:12:22.110 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.110 "is_configured": true, 00:12:22.110 "data_offset": 2048, 00:12:22.110 "data_size": 63488 00:12:22.110 }, 00:12:22.110 { 00:12:22.110 "name": "pt3", 00:12:22.110 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.110 "is_configured": true, 00:12:22.110 "data_offset": 2048, 00:12:22.110 
"data_size": 63488 00:12:22.110 }, 00:12:22.110 { 00:12:22.110 "name": "pt4", 00:12:22.110 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.110 "is_configured": true, 00:12:22.110 "data_offset": 2048, 00:12:22.110 "data_size": 63488 00:12:22.110 } 00:12:22.110 ] 00:12:22.110 }' 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:22.110 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:22.683 [2024-11-27 14:11:59.852984] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.683 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:22.683 "name": "raid_bdev1", 00:12:22.683 "aliases": [ 00:12:22.683 "9f58537c-f85b-404b-bb64-189ab046f5c8" 
00:12:22.683 ], 00:12:22.683 "product_name": "Raid Volume", 00:12:22.683 "block_size": 512, 00:12:22.683 "num_blocks": 253952, 00:12:22.683 "uuid": "9f58537c-f85b-404b-bb64-189ab046f5c8", 00:12:22.683 "assigned_rate_limits": { 00:12:22.683 "rw_ios_per_sec": 0, 00:12:22.683 "rw_mbytes_per_sec": 0, 00:12:22.683 "r_mbytes_per_sec": 0, 00:12:22.683 "w_mbytes_per_sec": 0 00:12:22.683 }, 00:12:22.683 "claimed": false, 00:12:22.683 "zoned": false, 00:12:22.683 "supported_io_types": { 00:12:22.683 "read": true, 00:12:22.683 "write": true, 00:12:22.683 "unmap": true, 00:12:22.683 "flush": true, 00:12:22.683 "reset": true, 00:12:22.683 "nvme_admin": false, 00:12:22.683 "nvme_io": false, 00:12:22.683 "nvme_io_md": false, 00:12:22.683 "write_zeroes": true, 00:12:22.683 "zcopy": false, 00:12:22.683 "get_zone_info": false, 00:12:22.683 "zone_management": false, 00:12:22.683 "zone_append": false, 00:12:22.683 "compare": false, 00:12:22.683 "compare_and_write": false, 00:12:22.683 "abort": false, 00:12:22.683 "seek_hole": false, 00:12:22.683 "seek_data": false, 00:12:22.683 "copy": false, 00:12:22.683 "nvme_iov_md": false 00:12:22.683 }, 00:12:22.683 "memory_domains": [ 00:12:22.683 { 00:12:22.683 "dma_device_id": "system", 00:12:22.683 "dma_device_type": 1 00:12:22.683 }, 00:12:22.683 { 00:12:22.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.683 "dma_device_type": 2 00:12:22.683 }, 00:12:22.683 { 00:12:22.683 "dma_device_id": "system", 00:12:22.683 "dma_device_type": 1 00:12:22.683 }, 00:12:22.683 { 00:12:22.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.683 "dma_device_type": 2 00:12:22.683 }, 00:12:22.683 { 00:12:22.683 "dma_device_id": "system", 00:12:22.683 "dma_device_type": 1 00:12:22.683 }, 00:12:22.683 { 00:12:22.683 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.683 "dma_device_type": 2 00:12:22.683 }, 00:12:22.683 { 00:12:22.683 "dma_device_id": "system", 00:12:22.683 "dma_device_type": 1 00:12:22.683 }, 00:12:22.683 { 00:12:22.683 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:22.683 "dma_device_type": 2 00:12:22.683 } 00:12:22.683 ], 00:12:22.683 "driver_specific": { 00:12:22.683 "raid": { 00:12:22.683 "uuid": "9f58537c-f85b-404b-bb64-189ab046f5c8", 00:12:22.683 "strip_size_kb": 64, 00:12:22.683 "state": "online", 00:12:22.683 "raid_level": "raid0", 00:12:22.683 "superblock": true, 00:12:22.683 "num_base_bdevs": 4, 00:12:22.683 "num_base_bdevs_discovered": 4, 00:12:22.683 "num_base_bdevs_operational": 4, 00:12:22.683 "base_bdevs_list": [ 00:12:22.683 { 00:12:22.683 "name": "pt1", 00:12:22.683 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.683 "is_configured": true, 00:12:22.683 "data_offset": 2048, 00:12:22.683 "data_size": 63488 00:12:22.683 }, 00:12:22.683 { 00:12:22.683 "name": "pt2", 00:12:22.684 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.684 "is_configured": true, 00:12:22.684 "data_offset": 2048, 00:12:22.684 "data_size": 63488 00:12:22.684 }, 00:12:22.684 { 00:12:22.684 "name": "pt3", 00:12:22.684 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.684 "is_configured": true, 00:12:22.684 "data_offset": 2048, 00:12:22.684 "data_size": 63488 00:12:22.684 }, 00:12:22.684 { 00:12:22.684 "name": "pt4", 00:12:22.684 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:22.684 "is_configured": true, 00:12:22.684 "data_offset": 2048, 00:12:22.684 "data_size": 63488 00:12:22.684 } 00:12:22.684 ] 00:12:22.684 } 00:12:22.684 } 00:12:22.684 }' 00:12:22.684 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.684 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:22.684 pt2 00:12:22.684 pt3 00:12:22.684 pt4' 00:12:22.684 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.941 14:11:59 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:22.941 14:11:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.941 14:12:00 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:22.941 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.199 [2024-11-27 14:12:00.221028] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=9f58537c-f85b-404b-bb64-189ab046f5c8 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 9f58537c-f85b-404b-bb64-189ab046f5c8 ']' 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.199 [2024-11-27 14:12:00.268643] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.199 [2024-11-27 14:12:00.268674] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.199 [2024-11-27 14:12:00.268769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.199 [2024-11-27 14:12:00.268911] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.199 [2024-11-27 14:12:00.268938] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.199 14:12:00 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.199 [2024-11-27 14:12:00.424736] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:23.199 [2024-11-27 14:12:00.427544] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:23.199 [2024-11-27 14:12:00.427882] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:23.199 [2024-11-27 14:12:00.427960] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:12:23.199 [2024-11-27 14:12:00.428041] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:23.199 [2024-11-27 14:12:00.428119] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:23.199 [2024-11-27 14:12:00.428170] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:23.199 [2024-11-27 14:12:00.428202] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:12:23.199 [2024-11-27 14:12:00.428224] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.199 [2024-11-27 14:12:00.428243] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:12:23.199 request: 00:12:23.199 { 00:12:23.199 "name": "raid_bdev1", 00:12:23.199 "raid_level": "raid0", 00:12:23.199 "base_bdevs": [ 00:12:23.199 "malloc1", 00:12:23.199 "malloc2", 00:12:23.199 "malloc3", 00:12:23.199 "malloc4" 00:12:23.199 ], 00:12:23.199 "strip_size_kb": 64, 00:12:23.199 "superblock": false, 00:12:23.199 "method": "bdev_raid_create", 00:12:23.199 "req_id": 1 00:12:23.199 } 00:12:23.199 Got JSON-RPC error response 00:12:23.199 response: 00:12:23.199 { 00:12:23.199 "code": -17, 00:12:23.199 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:23.199 } 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.199 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.456 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:12:23.456 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:12:23.456 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:12:23.456 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.456 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.456 [2024-11-27 14:12:00.492856] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:23.456 [2024-11-27 14:12:00.493089] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:23.456 [2024-11-27 14:12:00.493252] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:23.456 [2024-11-27 14:12:00.493375] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:23.456 [2024-11-27 14:12:00.496747] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:23.456 [2024-11-27 14:12:00.496958] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:23.456 [2024-11-27 14:12:00.497180] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:23.456 [2024-11-27 14:12:00.497369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:23.456 pt1 00:12:23.456 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.456 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:23.456 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:23.456 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:23.456 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:23.457 "name": "raid_bdev1", 00:12:23.457 "uuid": "9f58537c-f85b-404b-bb64-189ab046f5c8", 00:12:23.457 "strip_size_kb": 64, 00:12:23.457 "state": "configuring", 00:12:23.457 "raid_level": "raid0", 00:12:23.457 "superblock": true, 00:12:23.457 "num_base_bdevs": 4, 00:12:23.457 "num_base_bdevs_discovered": 1, 00:12:23.457 "num_base_bdevs_operational": 4, 00:12:23.457 "base_bdevs_list": [ 00:12:23.457 { 00:12:23.457 "name": "pt1", 00:12:23.457 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:23.457 "is_configured": true, 00:12:23.457 "data_offset": 2048, 00:12:23.457 "data_size": 63488 00:12:23.457 }, 00:12:23.457 { 00:12:23.457 "name": null, 00:12:23.457 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:23.457 "is_configured": false, 00:12:23.457 "data_offset": 2048, 00:12:23.457 "data_size": 63488 00:12:23.457 }, 00:12:23.457 { 00:12:23.457 "name": null, 00:12:23.457 
"uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.457 "is_configured": false, 00:12:23.457 "data_offset": 2048, 00:12:23.457 "data_size": 63488 00:12:23.457 }, 00:12:23.457 { 00:12:23.457 "name": null, 00:12:23.457 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:23.457 "is_configured": false, 00:12:23.457 "data_offset": 2048, 00:12:23.457 "data_size": 63488 00:12:23.457 } 00:12:23.457 ] 00:12:23.457 }' 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:23.457 14:12:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.024 [2024-11-27 14:12:01.013539] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:24.024 [2024-11-27 14:12:01.013647] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.024 [2024-11-27 14:12:01.013676] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:12:24.024 [2024-11-27 14:12:01.013693] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.024 [2024-11-27 14:12:01.014305] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.024 [2024-11-27 14:12:01.014344] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:24.024 [2024-11-27 14:12:01.014491] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:24.024 [2024-11-27 14:12:01.014548] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:24.024 pt2 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.024 [2024-11-27 14:12:01.021532] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.024 14:12:01 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.024 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.024 "name": "raid_bdev1", 00:12:24.024 "uuid": "9f58537c-f85b-404b-bb64-189ab046f5c8", 00:12:24.024 "strip_size_kb": 64, 00:12:24.024 "state": "configuring", 00:12:24.024 "raid_level": "raid0", 00:12:24.024 "superblock": true, 00:12:24.024 "num_base_bdevs": 4, 00:12:24.024 "num_base_bdevs_discovered": 1, 00:12:24.024 "num_base_bdevs_operational": 4, 00:12:24.024 "base_bdevs_list": [ 00:12:24.024 { 00:12:24.024 "name": "pt1", 00:12:24.024 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:24.024 "is_configured": true, 00:12:24.024 "data_offset": 2048, 00:12:24.024 "data_size": 63488 00:12:24.024 }, 00:12:24.024 { 00:12:24.024 "name": null, 00:12:24.024 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.024 "is_configured": false, 00:12:24.024 "data_offset": 0, 00:12:24.024 "data_size": 63488 00:12:24.024 }, 00:12:24.024 { 00:12:24.024 "name": null, 00:12:24.024 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.024 "is_configured": false, 00:12:24.024 "data_offset": 2048, 00:12:24.025 "data_size": 63488 00:12:24.025 }, 00:12:24.025 { 00:12:24.025 "name": null, 00:12:24.025 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:24.025 "is_configured": false, 00:12:24.025 "data_offset": 2048, 00:12:24.025 "data_size": 63488 00:12:24.025 } 00:12:24.025 ] 00:12:24.025 }' 00:12:24.025 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.025 14:12:01 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:24.283 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:12:24.283 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:24.283 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:24.283 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.283 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.283 [2024-11-27 14:12:01.549674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:24.283 [2024-11-27 14:12:01.549770] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.283 [2024-11-27 14:12:01.549849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:12:24.283 [2024-11-27 14:12:01.549866] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.283 [2024-11-27 14:12:01.550445] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.283 [2024-11-27 14:12:01.550470] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:24.283 [2024-11-27 14:12:01.550618] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:24.283 [2024-11-27 14:12:01.550653] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:24.283 pt2 00:12:24.283 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.283 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:24.283 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:24.283 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:24.283 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.283 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.283 [2024-11-27 14:12:01.557682] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:24.283 [2024-11-27 14:12:01.557763] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.283 [2024-11-27 14:12:01.557806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:12:24.283 [2024-11-27 14:12:01.557824] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.283 [2024-11-27 14:12:01.558367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.283 [2024-11-27 14:12:01.558414] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:24.283 [2024-11-27 14:12:01.558513] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:24.283 [2024-11-27 14:12:01.558554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:24.541 pt3 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.541 [2024-11-27 14:12:01.565608] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:12:24.541 [2024-11-27 14:12:01.565678] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:24.541 [2024-11-27 14:12:01.565705] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:12:24.541 [2024-11-27 14:12:01.565718] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:24.541 [2024-11-27 14:12:01.566282] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:24.541 [2024-11-27 14:12:01.566326] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:12:24.541 [2024-11-27 14:12:01.566436] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:12:24.541 [2024-11-27 14:12:01.566472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:12:24.541 [2024-11-27 14:12:01.566681] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:24.541 [2024-11-27 14:12:01.566704] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:24.541 [2024-11-27 14:12:01.567034] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:12:24.541 [2024-11-27 14:12:01.567261] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:24.541 [2024-11-27 14:12:01.567282] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:12:24.541 [2024-11-27 14:12:01.567434] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:24.541 pt4 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:24.541 "name": "raid_bdev1", 00:12:24.541 "uuid": "9f58537c-f85b-404b-bb64-189ab046f5c8", 00:12:24.541 "strip_size_kb": 64, 00:12:24.541 "state": "online", 00:12:24.541 "raid_level": "raid0", 00:12:24.541 
"superblock": true, 00:12:24.541 "num_base_bdevs": 4, 00:12:24.541 "num_base_bdevs_discovered": 4, 00:12:24.541 "num_base_bdevs_operational": 4, 00:12:24.541 "base_bdevs_list": [ 00:12:24.541 { 00:12:24.541 "name": "pt1", 00:12:24.541 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:24.541 "is_configured": true, 00:12:24.541 "data_offset": 2048, 00:12:24.541 "data_size": 63488 00:12:24.541 }, 00:12:24.541 { 00:12:24.541 "name": "pt2", 00:12:24.541 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:24.541 "is_configured": true, 00:12:24.541 "data_offset": 2048, 00:12:24.541 "data_size": 63488 00:12:24.541 }, 00:12:24.541 { 00:12:24.541 "name": "pt3", 00:12:24.541 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:24.541 "is_configured": true, 00:12:24.541 "data_offset": 2048, 00:12:24.541 "data_size": 63488 00:12:24.541 }, 00:12:24.541 { 00:12:24.541 "name": "pt4", 00:12:24.541 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:24.541 "is_configured": true, 00:12:24.541 "data_offset": 2048, 00:12:24.541 "data_size": 63488 00:12:24.541 } 00:12:24.541 ] 00:12:24.541 }' 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:24.541 14:12:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.108 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:12:25.108 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:12:25.108 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:25.108 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:25.108 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:25.108 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:25.108 14:12:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:25.108 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.108 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.108 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.108 [2024-11-27 14:12:02.118332] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.108 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.108 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:25.108 "name": "raid_bdev1", 00:12:25.108 "aliases": [ 00:12:25.108 "9f58537c-f85b-404b-bb64-189ab046f5c8" 00:12:25.108 ], 00:12:25.108 "product_name": "Raid Volume", 00:12:25.108 "block_size": 512, 00:12:25.108 "num_blocks": 253952, 00:12:25.108 "uuid": "9f58537c-f85b-404b-bb64-189ab046f5c8", 00:12:25.108 "assigned_rate_limits": { 00:12:25.108 "rw_ios_per_sec": 0, 00:12:25.108 "rw_mbytes_per_sec": 0, 00:12:25.108 "r_mbytes_per_sec": 0, 00:12:25.108 "w_mbytes_per_sec": 0 00:12:25.108 }, 00:12:25.108 "claimed": false, 00:12:25.108 "zoned": false, 00:12:25.108 "supported_io_types": { 00:12:25.108 "read": true, 00:12:25.108 "write": true, 00:12:25.108 "unmap": true, 00:12:25.108 "flush": true, 00:12:25.108 "reset": true, 00:12:25.108 "nvme_admin": false, 00:12:25.108 "nvme_io": false, 00:12:25.108 "nvme_io_md": false, 00:12:25.108 "write_zeroes": true, 00:12:25.108 "zcopy": false, 00:12:25.108 "get_zone_info": false, 00:12:25.108 "zone_management": false, 00:12:25.108 "zone_append": false, 00:12:25.108 "compare": false, 00:12:25.108 "compare_and_write": false, 00:12:25.108 "abort": false, 00:12:25.108 "seek_hole": false, 00:12:25.108 "seek_data": false, 00:12:25.108 "copy": false, 00:12:25.108 "nvme_iov_md": false 00:12:25.108 }, 00:12:25.108 
"memory_domains": [ 00:12:25.108 { 00:12:25.108 "dma_device_id": "system", 00:12:25.108 "dma_device_type": 1 00:12:25.108 }, 00:12:25.108 { 00:12:25.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.108 "dma_device_type": 2 00:12:25.108 }, 00:12:25.108 { 00:12:25.108 "dma_device_id": "system", 00:12:25.108 "dma_device_type": 1 00:12:25.108 }, 00:12:25.108 { 00:12:25.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.108 "dma_device_type": 2 00:12:25.108 }, 00:12:25.108 { 00:12:25.108 "dma_device_id": "system", 00:12:25.108 "dma_device_type": 1 00:12:25.108 }, 00:12:25.108 { 00:12:25.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.108 "dma_device_type": 2 00:12:25.108 }, 00:12:25.108 { 00:12:25.108 "dma_device_id": "system", 00:12:25.108 "dma_device_type": 1 00:12:25.108 }, 00:12:25.108 { 00:12:25.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:25.108 "dma_device_type": 2 00:12:25.108 } 00:12:25.108 ], 00:12:25.108 "driver_specific": { 00:12:25.108 "raid": { 00:12:25.108 "uuid": "9f58537c-f85b-404b-bb64-189ab046f5c8", 00:12:25.108 "strip_size_kb": 64, 00:12:25.108 "state": "online", 00:12:25.108 "raid_level": "raid0", 00:12:25.108 "superblock": true, 00:12:25.108 "num_base_bdevs": 4, 00:12:25.108 "num_base_bdevs_discovered": 4, 00:12:25.108 "num_base_bdevs_operational": 4, 00:12:25.108 "base_bdevs_list": [ 00:12:25.108 { 00:12:25.108 "name": "pt1", 00:12:25.108 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:25.108 "is_configured": true, 00:12:25.108 "data_offset": 2048, 00:12:25.108 "data_size": 63488 00:12:25.108 }, 00:12:25.108 { 00:12:25.108 "name": "pt2", 00:12:25.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:25.108 "is_configured": true, 00:12:25.108 "data_offset": 2048, 00:12:25.108 "data_size": 63488 00:12:25.108 }, 00:12:25.108 { 00:12:25.108 "name": "pt3", 00:12:25.108 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:25.108 "is_configured": true, 00:12:25.108 "data_offset": 2048, 00:12:25.108 "data_size": 63488 
00:12:25.108 }, 00:12:25.108 { 00:12:25.108 "name": "pt4", 00:12:25.108 "uuid": "00000000-0000-0000-0000-000000000004", 00:12:25.108 "is_configured": true, 00:12:25.108 "data_offset": 2048, 00:12:25.108 "data_size": 63488 00:12:25.108 } 00:12:25.108 ] 00:12:25.108 } 00:12:25.108 } 00:12:25.108 }' 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:12:25.109 pt2 00:12:25.109 pt3 00:12:25.109 pt4' 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.109 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:25.367 [2024-11-27 14:12:02.494396] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 9f58537c-f85b-404b-bb64-189ab046f5c8 '!=' 9f58537c-f85b-404b-bb64-189ab046f5c8 ']' 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid0 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 70733 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 70733 ']' 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 70733 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70733 00:12:25.367 killing process with pid 70733 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70733' 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 70733 00:12:25.367 [2024-11-27 14:12:02.573642] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:25.367 14:12:02 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 70733 00:12:25.367 [2024-11-27 14:12:02.573749] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:25.367 [2024-11-27 14:12:02.573913] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:25.367 [2024-11-27 14:12:02.573930] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:12:25.979 [2024-11-27 14:12:02.928212] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:26.916 14:12:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:12:26.916 00:12:26.916 real 0m5.942s 00:12:26.916 user 0m8.906s 00:12:26.916 sys 0m0.886s 00:12:26.916 ************************************ 00:12:26.916 END TEST raid_superblock_test 00:12:26.916 ************************************ 00:12:26.916 14:12:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:26.916 14:12:03 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.916 14:12:04 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:12:26.916 14:12:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:26.916 14:12:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:26.916 14:12:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:26.916 ************************************ 00:12:26.916 START TEST raid_read_error_test 00:12:26.916 ************************************ 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 read 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.aUlZhqe1kW 00:12:26.916 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=70999 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 70999 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 70999 ']' 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.916 14:12:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.916 [2024-11-27 14:12:04.161300] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:12:26.916 [2024-11-27 14:12:04.161485] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70999 ] 00:12:27.175 [2024-11-27 14:12:04.348979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:27.434 [2024-11-27 14:12:04.484209] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:27.434 [2024-11-27 14:12:04.684549] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:27.434 [2024-11-27 14:12:04.684602] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.002 BaseBdev1_malloc 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.002 true 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.002 [2024-11-27 14:12:05.217438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:28.002 [2024-11-27 14:12:05.217503] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.002 [2024-11-27 14:12:05.217534] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:28.002 [2024-11-27 14:12:05.217554] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.002 [2024-11-27 14:12:05.220398] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.002 [2024-11-27 14:12:05.220447] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:28.002 BaseBdev1 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.002 BaseBdev2_malloc 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.002 true 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.002 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.264 [2024-11-27 14:12:05.282008] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:28.264 [2024-11-27 14:12:05.282073] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.264 [2024-11-27 14:12:05.282103] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:28.264 [2024-11-27 14:12:05.282122] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.264 [2024-11-27 14:12:05.285028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.264 [2024-11-27 14:12:05.285109] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:28.264 BaseBdev2 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.264 BaseBdev3_malloc 00:12:28.264 14:12:05 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.264 true 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.264 [2024-11-27 14:12:05.364195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:28.264 [2024-11-27 14:12:05.364260] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.264 [2024-11-27 14:12:05.364289] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:28.264 [2024-11-27 14:12:05.364310] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.264 [2024-11-27 14:12:05.367364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.264 [2024-11-27 14:12:05.367430] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:28.264 BaseBdev3 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.264 BaseBdev4_malloc 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.264 true 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.264 [2024-11-27 14:12:05.425152] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:28.264 [2024-11-27 14:12:05.425248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:28.264 [2024-11-27 14:12:05.425276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:28.264 [2024-11-27 14:12:05.425297] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:28.264 [2024-11-27 14:12:05.428146] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:28.264 [2024-11-27 14:12:05.428241] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:28.264 BaseBdev4 00:12:28.264 14:12:05 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.265 [2024-11-27 14:12:05.433246] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:28.265 [2024-11-27 14:12:05.435697] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:28.265 [2024-11-27 14:12:05.435834] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:28.265 [2024-11-27 14:12:05.435949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:28.265 [2024-11-27 14:12:05.436272] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:28.265 [2024-11-27 14:12:05.436308] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:28.265 [2024-11-27 14:12:05.436663] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:28.265 [2024-11-27 14:12:05.436926] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:28.265 [2024-11-27 14:12:05.436955] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:28.265 [2024-11-27 14:12:05.437197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:28.265 14:12:05 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:28.265 "name": "raid_bdev1", 00:12:28.265 "uuid": "61cba7a9-a9cf-4efe-aaad-a20db51c49ff", 00:12:28.265 "strip_size_kb": 64, 00:12:28.265 "state": "online", 00:12:28.265 "raid_level": "raid0", 00:12:28.265 "superblock": true, 00:12:28.265 "num_base_bdevs": 4, 00:12:28.265 "num_base_bdevs_discovered": 4, 00:12:28.265 "num_base_bdevs_operational": 4, 00:12:28.265 "base_bdevs_list": [ 00:12:28.265 
{ 00:12:28.265 "name": "BaseBdev1", 00:12:28.265 "uuid": "8bfea026-7ef3-5e4e-a34a-3f3028d06cf1", 00:12:28.265 "is_configured": true, 00:12:28.265 "data_offset": 2048, 00:12:28.265 "data_size": 63488 00:12:28.265 }, 00:12:28.265 { 00:12:28.265 "name": "BaseBdev2", 00:12:28.265 "uuid": "9ce6d5a2-b033-5092-b7db-ccb3633813b1", 00:12:28.265 "is_configured": true, 00:12:28.265 "data_offset": 2048, 00:12:28.265 "data_size": 63488 00:12:28.265 }, 00:12:28.265 { 00:12:28.265 "name": "BaseBdev3", 00:12:28.265 "uuid": "cfe112f4-3541-58f0-a151-4e8db8c6959a", 00:12:28.265 "is_configured": true, 00:12:28.265 "data_offset": 2048, 00:12:28.265 "data_size": 63488 00:12:28.265 }, 00:12:28.265 { 00:12:28.265 "name": "BaseBdev4", 00:12:28.265 "uuid": "bd45dd0a-8922-5997-88c0-f86c86599755", 00:12:28.265 "is_configured": true, 00:12:28.265 "data_offset": 2048, 00:12:28.265 "data_size": 63488 00:12:28.265 } 00:12:28.265 ] 00:12:28.265 }' 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:28.265 14:12:05 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.833 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:28.833 14:12:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:28.833 [2024-11-27 14:12:06.039211] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.771 14:12:06 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:29.771 14:12:06 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.771 14:12:07 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:29.771 "name": "raid_bdev1", 00:12:29.771 "uuid": "61cba7a9-a9cf-4efe-aaad-a20db51c49ff", 00:12:29.771 "strip_size_kb": 64, 00:12:29.771 "state": "online", 00:12:29.771 "raid_level": "raid0", 00:12:29.771 "superblock": true, 00:12:29.771 "num_base_bdevs": 4, 00:12:29.771 "num_base_bdevs_discovered": 4, 00:12:29.771 "num_base_bdevs_operational": 4, 00:12:29.771 "base_bdevs_list": [ 00:12:29.771 { 00:12:29.771 "name": "BaseBdev1", 00:12:29.771 "uuid": "8bfea026-7ef3-5e4e-a34a-3f3028d06cf1", 00:12:29.771 "is_configured": true, 00:12:29.771 "data_offset": 2048, 00:12:29.771 "data_size": 63488 00:12:29.771 }, 00:12:29.771 { 00:12:29.771 "name": "BaseBdev2", 00:12:29.771 "uuid": "9ce6d5a2-b033-5092-b7db-ccb3633813b1", 00:12:29.771 "is_configured": true, 00:12:29.772 "data_offset": 2048, 00:12:29.772 "data_size": 63488 00:12:29.772 }, 00:12:29.772 { 00:12:29.772 "name": "BaseBdev3", 00:12:29.772 "uuid": "cfe112f4-3541-58f0-a151-4e8db8c6959a", 00:12:29.772 "is_configured": true, 00:12:29.772 "data_offset": 2048, 00:12:29.772 "data_size": 63488 00:12:29.772 }, 00:12:29.772 { 00:12:29.772 "name": "BaseBdev4", 00:12:29.772 "uuid": "bd45dd0a-8922-5997-88c0-f86c86599755", 00:12:29.772 "is_configured": true, 00:12:29.772 "data_offset": 2048, 00:12:29.772 "data_size": 63488 00:12:29.772 } 00:12:29.772 ] 00:12:29.772 }' 00:12:29.772 14:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:29.772 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.340 [2024-11-27 14:12:07.482116] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.340 [2024-11-27 14:12:07.482174] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.340 [2024-11-27 14:12:07.485670] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.340 [2024-11-27 14:12:07.485773] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:30.340 [2024-11-27 14:12:07.485847] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.340 [2024-11-27 14:12:07.485868] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:30.340 { 00:12:30.340 "results": [ 00:12:30.340 { 00:12:30.340 "job": "raid_bdev1", 00:12:30.340 "core_mask": "0x1", 00:12:30.340 "workload": "randrw", 00:12:30.340 "percentage": 50, 00:12:30.340 "status": "finished", 00:12:30.340 "queue_depth": 1, 00:12:30.340 "io_size": 131072, 00:12:30.340 "runtime": 1.440371, 00:12:30.340 "iops": 9997.42427471811, 00:12:30.340 "mibps": 1249.6780343397638, 00:12:30.340 "io_failed": 1, 00:12:30.340 "io_timeout": 0, 00:12:30.340 "avg_latency_us": 139.28442848034544, 00:12:30.340 "min_latency_us": 39.09818181818182, 00:12:30.340 "max_latency_us": 1869.2654545454545 00:12:30.340 } 00:12:30.340 ], 00:12:30.340 "core_count": 1 00:12:30.340 } 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 70999 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 70999 ']' 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 70999 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70999 00:12:30.340 killing process with pid 70999 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70999' 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 70999 00:12:30.340 [2024-11-27 14:12:07.520857] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:30.340 14:12:07 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 70999 00:12:30.599 [2024-11-27 14:12:07.814241] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:31.976 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.aUlZhqe1kW 00:12:31.976 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:31.976 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:31.976 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:12:31.976 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:31.976 ************************************ 00:12:31.976 END TEST raid_read_error_test 00:12:31.976 ************************************ 00:12:31.976 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:31.976 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:31.976 14:12:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:12:31.976 00:12:31.976 real 0m4.898s 
00:12:31.976 user 0m6.022s 00:12:31.976 sys 0m0.595s 00:12:31.976 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.976 14:12:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.976 14:12:08 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:12:31.976 14:12:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:31.976 14:12:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.976 14:12:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:31.976 ************************************ 00:12:31.976 START TEST raid_write_error_test 00:12:31.976 ************************************ 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid0 4 write 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid0 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid0 '!=' raid1 ']' 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:12:31.976 14:12:08 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:12:31.976 14:12:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:12:31.976 14:12:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.rIBY3hIl3a 00:12:31.976 14:12:09 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=71148 00:12:31.976 14:12:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:31.976 14:12:09 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 71148 00:12:31.976 14:12:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 71148 ']' 00:12:31.976 14:12:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:31.976 14:12:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:31.977 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:31.977 14:12:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:31.977 14:12:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:31.977 14:12:09 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:31.977 [2024-11-27 14:12:09.098761] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:12:31.977 [2024-11-27 14:12:09.098924] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71148 ] 00:12:32.235 [2024-11-27 14:12:09.274132] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:32.235 [2024-11-27 14:12:09.406824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.497 [2024-11-27 14:12:09.618864] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:32.497 [2024-11-27 14:12:09.618951] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.067 BaseBdev1_malloc 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.067 true 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.067 [2024-11-27 14:12:10.185871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:33.067 [2024-11-27 14:12:10.185946] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.067 [2024-11-27 14:12:10.185979] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:12:33.067 [2024-11-27 14:12:10.185998] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.067 [2024-11-27 14:12:10.188954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.067 [2024-11-27 14:12:10.189007] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:33.067 BaseBdev1 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.067 BaseBdev2_malloc 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:12:33.067 14:12:10 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.067 true 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.067 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.067 [2024-11-27 14:12:10.243722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:33.067 [2024-11-27 14:12:10.243832] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.067 [2024-11-27 14:12:10.243860] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:12:33.067 [2024-11-27 14:12:10.243879] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.068 [2024-11-27 14:12:10.246650] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.068 [2024-11-27 14:12:10.246700] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:33.068 BaseBdev2 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:12:33.068 BaseBdev3_malloc 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.068 true 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.068 [2024-11-27 14:12:10.312559] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:33.068 [2024-11-27 14:12:10.312640] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.068 [2024-11-27 14:12:10.312668] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:12:33.068 [2024-11-27 14:12:10.312687] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.068 [2024-11-27 14:12:10.315477] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.068 [2024-11-27 14:12:10.315526] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:33.068 BaseBdev3 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.068 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.367 BaseBdev4_malloc 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.367 true 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.367 [2024-11-27 14:12:10.374722] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:12:33.367 [2024-11-27 14:12:10.374816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.367 [2024-11-27 14:12:10.374849] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:12:33.367 [2024-11-27 14:12:10.374870] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.367 [2024-11-27 14:12:10.377717] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.367 [2024-11-27 14:12:10.377812] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:12:33.367 BaseBdev4 
00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r raid0 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.367 [2024-11-27 14:12:10.382853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:33.367 [2024-11-27 14:12:10.385290] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:33.367 [2024-11-27 14:12:10.385406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:33.367 [2024-11-27 14:12:10.385511] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:33.367 [2024-11-27 14:12:10.385826] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:12:33.367 [2024-11-27 14:12:10.385864] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:33.367 [2024-11-27 14:12:10.386206] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:12:33.367 [2024-11-27 14:12:10.386445] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:12:33.367 [2024-11-27 14:12:10.386474] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:12:33.367 [2024-11-27 14:12:10.386756] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online raid0 64 4 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:33.367 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:33.368 "name": "raid_bdev1", 00:12:33.368 "uuid": "1a2af2f9-760b-491c-8f74-79ad3c0a8676", 00:12:33.368 "strip_size_kb": 64, 00:12:33.368 "state": "online", 00:12:33.368 "raid_level": "raid0", 00:12:33.368 "superblock": true, 00:12:33.368 "num_base_bdevs": 4, 00:12:33.368 "num_base_bdevs_discovered": 4, 00:12:33.368 
"num_base_bdevs_operational": 4, 00:12:33.368 "base_bdevs_list": [ 00:12:33.368 { 00:12:33.368 "name": "BaseBdev1", 00:12:33.368 "uuid": "ee7bca65-6ac8-523f-bd4f-226035f9cb38", 00:12:33.368 "is_configured": true, 00:12:33.368 "data_offset": 2048, 00:12:33.368 "data_size": 63488 00:12:33.368 }, 00:12:33.368 { 00:12:33.368 "name": "BaseBdev2", 00:12:33.368 "uuid": "b886b9b9-36a6-58d0-beca-716e970f786a", 00:12:33.368 "is_configured": true, 00:12:33.368 "data_offset": 2048, 00:12:33.368 "data_size": 63488 00:12:33.368 }, 00:12:33.368 { 00:12:33.368 "name": "BaseBdev3", 00:12:33.368 "uuid": "ce7ddf46-e4ec-5020-b8c3-f8cf60e6b728", 00:12:33.368 "is_configured": true, 00:12:33.368 "data_offset": 2048, 00:12:33.368 "data_size": 63488 00:12:33.368 }, 00:12:33.368 { 00:12:33.368 "name": "BaseBdev4", 00:12:33.368 "uuid": "18085f58-c271-5d2e-8a4c-04a7a1e64cc9", 00:12:33.368 "is_configured": true, 00:12:33.368 "data_offset": 2048, 00:12:33.368 "data_size": 63488 00:12:33.368 } 00:12:33.368 ] 00:12:33.368 }' 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:33.368 14:12:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.935 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:12:33.935 14:12:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:12:33.935 [2024-11-27 14:12:11.044340] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid0 = \r\a\i\d\1 ]] 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid0 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:34.872 "name": "raid_bdev1", 00:12:34.872 "uuid": "1a2af2f9-760b-491c-8f74-79ad3c0a8676", 00:12:34.872 "strip_size_kb": 64, 00:12:34.872 "state": "online", 00:12:34.872 "raid_level": "raid0", 00:12:34.872 "superblock": true, 00:12:34.872 "num_base_bdevs": 4, 00:12:34.872 "num_base_bdevs_discovered": 4, 00:12:34.872 "num_base_bdevs_operational": 4, 00:12:34.872 "base_bdevs_list": [ 00:12:34.872 { 00:12:34.872 "name": "BaseBdev1", 00:12:34.872 "uuid": "ee7bca65-6ac8-523f-bd4f-226035f9cb38", 00:12:34.872 "is_configured": true, 00:12:34.872 "data_offset": 2048, 00:12:34.872 "data_size": 63488 00:12:34.872 }, 00:12:34.872 { 00:12:34.872 "name": "BaseBdev2", 00:12:34.872 "uuid": "b886b9b9-36a6-58d0-beca-716e970f786a", 00:12:34.872 "is_configured": true, 00:12:34.872 "data_offset": 2048, 00:12:34.872 "data_size": 63488 00:12:34.872 }, 00:12:34.872 { 00:12:34.872 "name": "BaseBdev3", 00:12:34.872 "uuid": "ce7ddf46-e4ec-5020-b8c3-f8cf60e6b728", 00:12:34.872 "is_configured": true, 00:12:34.872 "data_offset": 2048, 00:12:34.872 "data_size": 63488 00:12:34.872 }, 00:12:34.872 { 00:12:34.872 "name": "BaseBdev4", 00:12:34.872 "uuid": "18085f58-c271-5d2e-8a4c-04a7a1e64cc9", 00:12:34.872 "is_configured": true, 00:12:34.872 "data_offset": 2048, 00:12:34.872 "data_size": 63488 00:12:34.872 } 00:12:34.872 ] 00:12:34.872 }' 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:34.872 14:12:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # 
set +x 00:12:35.440 [2024-11-27 14:12:12.490990] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:35.440 [2024-11-27 14:12:12.491034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:35.440 [2024-11-27 14:12:12.494398] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:35.440 [2024-11-27 14:12:12.494479] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.440 [2024-11-27 14:12:12.494540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:35.440 [2024-11-27 14:12:12.494559] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:12:35.440 { 00:12:35.440 "results": [ 00:12:35.440 { 00:12:35.440 "job": "raid_bdev1", 00:12:35.440 "core_mask": "0x1", 00:12:35.440 "workload": "randrw", 00:12:35.440 "percentage": 50, 00:12:35.440 "status": "finished", 00:12:35.440 "queue_depth": 1, 00:12:35.440 "io_size": 131072, 00:12:35.440 "runtime": 1.444245, 00:12:35.440 "iops": 10245.491589030948, 00:12:35.440 "mibps": 1280.6864486288684, 00:12:35.440 "io_failed": 1, 00:12:35.440 "io_timeout": 0, 00:12:35.440 "avg_latency_us": 135.67201071397855, 00:12:35.440 "min_latency_us": 38.86545454545455, 00:12:35.440 "max_latency_us": 1884.16 00:12:35.440 } 00:12:35.440 ], 00:12:35.440 "core_count": 1 00:12:35.440 } 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 71148 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 71148 ']' 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 71148 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 
00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71148 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:35.440 killing process with pid 71148 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71148' 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 71148 00:12:35.440 [2024-11-27 14:12:12.528153] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:35.440 14:12:12 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 71148 00:12:35.737 [2024-11-27 14:12:12.821766] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.759 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:12:36.759 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.rIBY3hIl3a 00:12:36.759 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:12:36.759 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:12:36.759 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid0 00:12:36.759 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:36.759 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:36.759 14:12:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:12:36.759 00:12:36.759 real 0m4.945s 00:12:36.759 user 0m6.159s 00:12:36.759 sys 0m0.602s 00:12:36.759 14:12:13 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:36.759 ************************************ 00:12:36.759 END TEST raid_write_error_test 00:12:36.759 ************************************ 00:12:36.759 14:12:13 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.759 14:12:13 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:12:36.759 14:12:13 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:12:36.759 14:12:13 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:36.759 14:12:13 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:36.759 14:12:13 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.759 ************************************ 00:12:36.759 START TEST raid_state_function_test 00:12:36.759 ************************************ 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 false 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@217 -- # 
strip_size_create_arg='-z 64' 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=71297 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71297' 00:12:36.759 Process raid pid: 71297 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 71297 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 71297 ']' 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:36.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:36.759 14:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:37.018 [2024-11-27 14:12:14.106533] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:12:37.018 [2024-11-27 14:12:14.106742] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:37.018 [2024-11-27 14:12:14.291911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.276 [2024-11-27 14:12:14.421797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.534 [2024-11-27 14:12:14.629041] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.534 [2024-11-27 14:12:14.629079] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.100 [2024-11-27 14:12:15.119672] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:38.100 [2024-11-27 14:12:15.119734] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:38.100 [2024-11-27 14:12:15.119750] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:38.100 [2024-11-27 14:12:15.119787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:38.100 [2024-11-27 14:12:15.119800] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:12:38.100 [2024-11-27 14:12:15.119815] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:38.100 [2024-11-27 14:12:15.119825] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:38.100 [2024-11-27 14:12:15.119838] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.100 "name": "Existed_Raid", 00:12:38.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.100 "strip_size_kb": 64, 00:12:38.100 "state": "configuring", 00:12:38.100 "raid_level": "concat", 00:12:38.100 "superblock": false, 00:12:38.100 "num_base_bdevs": 4, 00:12:38.100 "num_base_bdevs_discovered": 0, 00:12:38.100 "num_base_bdevs_operational": 4, 00:12:38.100 "base_bdevs_list": [ 00:12:38.100 { 00:12:38.100 "name": "BaseBdev1", 00:12:38.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.100 "is_configured": false, 00:12:38.100 "data_offset": 0, 00:12:38.100 "data_size": 0 00:12:38.100 }, 00:12:38.100 { 00:12:38.100 "name": "BaseBdev2", 00:12:38.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.100 "is_configured": false, 00:12:38.100 "data_offset": 0, 00:12:38.100 "data_size": 0 00:12:38.100 }, 00:12:38.100 { 00:12:38.100 "name": "BaseBdev3", 00:12:38.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.100 "is_configured": false, 00:12:38.100 "data_offset": 0, 00:12:38.100 "data_size": 0 00:12:38.100 }, 00:12:38.100 { 00:12:38.100 "name": "BaseBdev4", 00:12:38.100 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.100 "is_configured": false, 00:12:38.100 "data_offset": 0, 00:12:38.100 "data_size": 0 00:12:38.100 } 00:12:38.100 ] 00:12:38.100 }' 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.100 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.359 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete 
Existed_Raid 00:12:38.359 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.359 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.359 [2024-11-27 14:12:15.627802] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:38.359 [2024-11-27 14:12:15.627846] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:38.359 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.359 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:38.359 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.359 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.617 [2024-11-27 14:12:15.635824] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:38.617 [2024-11-27 14:12:15.635881] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:38.617 [2024-11-27 14:12:15.635896] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:38.617 [2024-11-27 14:12:15.635912] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:38.617 [2024-11-27 14:12:15.635922] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:38.617 [2024-11-27 14:12:15.635936] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:38.617 [2024-11-27 14:12:15.635946] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:38.617 [2024-11-27 14:12:15.635959] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:38.617 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.617 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.618 [2024-11-27 14:12:15.680695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:38.618 BaseBdev1 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.618 [ 00:12:38.618 { 00:12:38.618 "name": "BaseBdev1", 00:12:38.618 "aliases": [ 00:12:38.618 "d018a4e3-8c22-4cde-9789-ef72c588e9eb" 00:12:38.618 ], 00:12:38.618 "product_name": "Malloc disk", 00:12:38.618 "block_size": 512, 00:12:38.618 "num_blocks": 65536, 00:12:38.618 "uuid": "d018a4e3-8c22-4cde-9789-ef72c588e9eb", 00:12:38.618 "assigned_rate_limits": { 00:12:38.618 "rw_ios_per_sec": 0, 00:12:38.618 "rw_mbytes_per_sec": 0, 00:12:38.618 "r_mbytes_per_sec": 0, 00:12:38.618 "w_mbytes_per_sec": 0 00:12:38.618 }, 00:12:38.618 "claimed": true, 00:12:38.618 "claim_type": "exclusive_write", 00:12:38.618 "zoned": false, 00:12:38.618 "supported_io_types": { 00:12:38.618 "read": true, 00:12:38.618 "write": true, 00:12:38.618 "unmap": true, 00:12:38.618 "flush": true, 00:12:38.618 "reset": true, 00:12:38.618 "nvme_admin": false, 00:12:38.618 "nvme_io": false, 00:12:38.618 "nvme_io_md": false, 00:12:38.618 "write_zeroes": true, 00:12:38.618 "zcopy": true, 00:12:38.618 "get_zone_info": false, 00:12:38.618 "zone_management": false, 00:12:38.618 "zone_append": false, 00:12:38.618 "compare": false, 00:12:38.618 "compare_and_write": false, 00:12:38.618 "abort": true, 00:12:38.618 "seek_hole": false, 00:12:38.618 "seek_data": false, 00:12:38.618 "copy": true, 00:12:38.618 "nvme_iov_md": false 00:12:38.618 }, 00:12:38.618 "memory_domains": [ 00:12:38.618 { 00:12:38.618 "dma_device_id": "system", 00:12:38.618 "dma_device_type": 1 00:12:38.618 }, 00:12:38.618 { 00:12:38.618 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:38.618 "dma_device_type": 2 00:12:38.618 } 00:12:38.618 ], 00:12:38.618 "driver_specific": {} 00:12:38.618 } 00:12:38.618 ] 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:38.618 "name": "Existed_Raid", 
00:12:38.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.618 "strip_size_kb": 64, 00:12:38.618 "state": "configuring", 00:12:38.618 "raid_level": "concat", 00:12:38.618 "superblock": false, 00:12:38.618 "num_base_bdevs": 4, 00:12:38.618 "num_base_bdevs_discovered": 1, 00:12:38.618 "num_base_bdevs_operational": 4, 00:12:38.618 "base_bdevs_list": [ 00:12:38.618 { 00:12:38.618 "name": "BaseBdev1", 00:12:38.618 "uuid": "d018a4e3-8c22-4cde-9789-ef72c588e9eb", 00:12:38.618 "is_configured": true, 00:12:38.618 "data_offset": 0, 00:12:38.618 "data_size": 65536 00:12:38.618 }, 00:12:38.618 { 00:12:38.618 "name": "BaseBdev2", 00:12:38.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.618 "is_configured": false, 00:12:38.618 "data_offset": 0, 00:12:38.618 "data_size": 0 00:12:38.618 }, 00:12:38.618 { 00:12:38.618 "name": "BaseBdev3", 00:12:38.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.618 "is_configured": false, 00:12:38.618 "data_offset": 0, 00:12:38.618 "data_size": 0 00:12:38.618 }, 00:12:38.618 { 00:12:38.618 "name": "BaseBdev4", 00:12:38.618 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:38.618 "is_configured": false, 00:12:38.618 "data_offset": 0, 00:12:38.618 "data_size": 0 00:12:38.618 } 00:12:38.618 ] 00:12:38.618 }' 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:38.618 14:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.185 [2024-11-27 14:12:16.232982] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:39.185 [2024-11-27 14:12:16.233049] bdev_raid.c: 
380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.185 [2024-11-27 14:12:16.241083] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.185 [2024-11-27 14:12:16.243745] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:39.185 [2024-11-27 14:12:16.243833] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:39.185 [2024-11-27 14:12:16.243854] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:39.185 [2024-11-27 14:12:16.243872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:39.185 [2024-11-27 14:12:16.243883] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:39.185 [2024-11-27 14:12:16.243896] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.185 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.185 "name": "Existed_Raid", 00:12:39.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.185 "strip_size_kb": 64, 00:12:39.185 "state": "configuring", 00:12:39.185 "raid_level": "concat", 00:12:39.185 "superblock": false, 00:12:39.185 "num_base_bdevs": 4, 00:12:39.185 
"num_base_bdevs_discovered": 1, 00:12:39.185 "num_base_bdevs_operational": 4, 00:12:39.185 "base_bdevs_list": [ 00:12:39.185 { 00:12:39.185 "name": "BaseBdev1", 00:12:39.185 "uuid": "d018a4e3-8c22-4cde-9789-ef72c588e9eb", 00:12:39.185 "is_configured": true, 00:12:39.185 "data_offset": 0, 00:12:39.185 "data_size": 65536 00:12:39.185 }, 00:12:39.185 { 00:12:39.185 "name": "BaseBdev2", 00:12:39.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.185 "is_configured": false, 00:12:39.185 "data_offset": 0, 00:12:39.185 "data_size": 0 00:12:39.186 }, 00:12:39.186 { 00:12:39.186 "name": "BaseBdev3", 00:12:39.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.186 "is_configured": false, 00:12:39.186 "data_offset": 0, 00:12:39.186 "data_size": 0 00:12:39.186 }, 00:12:39.186 { 00:12:39.186 "name": "BaseBdev4", 00:12:39.186 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.186 "is_configured": false, 00:12:39.186 "data_offset": 0, 00:12:39.186 "data_size": 0 00:12:39.186 } 00:12:39.186 ] 00:12:39.186 }' 00:12:39.186 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.186 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.753 [2024-11-27 14:12:16.864370] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.753 BaseBdev2 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:39.753 14:12:16 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.753 [ 00:12:39.753 { 00:12:39.753 "name": "BaseBdev2", 00:12:39.753 "aliases": [ 00:12:39.753 "c3f6ce1a-1fee-43e3-b57b-44cb041e7bb5" 00:12:39.753 ], 00:12:39.753 "product_name": "Malloc disk", 00:12:39.753 "block_size": 512, 00:12:39.753 "num_blocks": 65536, 00:12:39.753 "uuid": "c3f6ce1a-1fee-43e3-b57b-44cb041e7bb5", 00:12:39.753 "assigned_rate_limits": { 00:12:39.753 "rw_ios_per_sec": 0, 00:12:39.753 "rw_mbytes_per_sec": 0, 00:12:39.753 "r_mbytes_per_sec": 0, 00:12:39.753 "w_mbytes_per_sec": 0 00:12:39.753 }, 00:12:39.753 "claimed": true, 00:12:39.753 "claim_type": "exclusive_write", 00:12:39.753 "zoned": false, 00:12:39.753 "supported_io_types": { 
00:12:39.753 "read": true, 00:12:39.753 "write": true, 00:12:39.753 "unmap": true, 00:12:39.753 "flush": true, 00:12:39.753 "reset": true, 00:12:39.753 "nvme_admin": false, 00:12:39.753 "nvme_io": false, 00:12:39.753 "nvme_io_md": false, 00:12:39.753 "write_zeroes": true, 00:12:39.753 "zcopy": true, 00:12:39.753 "get_zone_info": false, 00:12:39.753 "zone_management": false, 00:12:39.753 "zone_append": false, 00:12:39.753 "compare": false, 00:12:39.753 "compare_and_write": false, 00:12:39.753 "abort": true, 00:12:39.753 "seek_hole": false, 00:12:39.753 "seek_data": false, 00:12:39.753 "copy": true, 00:12:39.753 "nvme_iov_md": false 00:12:39.753 }, 00:12:39.753 "memory_domains": [ 00:12:39.753 { 00:12:39.753 "dma_device_id": "system", 00:12:39.753 "dma_device_type": 1 00:12:39.753 }, 00:12:39.753 { 00:12:39.753 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:39.753 "dma_device_type": 2 00:12:39.753 } 00:12:39.753 ], 00:12:39.753 "driver_specific": {} 00:12:39.753 } 00:12:39.753 ] 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.753 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # 
local strip_size=64 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:39.754 "name": "Existed_Raid", 00:12:39.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.754 "strip_size_kb": 64, 00:12:39.754 "state": "configuring", 00:12:39.754 "raid_level": "concat", 00:12:39.754 "superblock": false, 00:12:39.754 "num_base_bdevs": 4, 00:12:39.754 "num_base_bdevs_discovered": 2, 00:12:39.754 "num_base_bdevs_operational": 4, 00:12:39.754 "base_bdevs_list": [ 00:12:39.754 { 00:12:39.754 "name": "BaseBdev1", 00:12:39.754 "uuid": "d018a4e3-8c22-4cde-9789-ef72c588e9eb", 00:12:39.754 "is_configured": true, 00:12:39.754 "data_offset": 0, 00:12:39.754 "data_size": 65536 00:12:39.754 }, 00:12:39.754 { 00:12:39.754 "name": "BaseBdev2", 00:12:39.754 "uuid": "c3f6ce1a-1fee-43e3-b57b-44cb041e7bb5", 00:12:39.754 
"is_configured": true, 00:12:39.754 "data_offset": 0, 00:12:39.754 "data_size": 65536 00:12:39.754 }, 00:12:39.754 { 00:12:39.754 "name": "BaseBdev3", 00:12:39.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.754 "is_configured": false, 00:12:39.754 "data_offset": 0, 00:12:39.754 "data_size": 0 00:12:39.754 }, 00:12:39.754 { 00:12:39.754 "name": "BaseBdev4", 00:12:39.754 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:39.754 "is_configured": false, 00:12:39.754 "data_offset": 0, 00:12:39.754 "data_size": 0 00:12:39.754 } 00:12:39.754 ] 00:12:39.754 }' 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:39.754 14:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.322 [2024-11-27 14:12:17.457856] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:40.322 BaseBdev3 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.322 [ 00:12:40.322 { 00:12:40.322 "name": "BaseBdev3", 00:12:40.322 "aliases": [ 00:12:40.322 "86b50702-d70f-4935-b7e9-e930798b68ef" 00:12:40.322 ], 00:12:40.322 "product_name": "Malloc disk", 00:12:40.322 "block_size": 512, 00:12:40.322 "num_blocks": 65536, 00:12:40.322 "uuid": "86b50702-d70f-4935-b7e9-e930798b68ef", 00:12:40.322 "assigned_rate_limits": { 00:12:40.322 "rw_ios_per_sec": 0, 00:12:40.322 "rw_mbytes_per_sec": 0, 00:12:40.322 "r_mbytes_per_sec": 0, 00:12:40.322 "w_mbytes_per_sec": 0 00:12:40.322 }, 00:12:40.322 "claimed": true, 00:12:40.322 "claim_type": "exclusive_write", 00:12:40.322 "zoned": false, 00:12:40.322 "supported_io_types": { 00:12:40.322 "read": true, 00:12:40.322 "write": true, 00:12:40.322 "unmap": true, 00:12:40.322 "flush": true, 00:12:40.322 "reset": true, 00:12:40.322 "nvme_admin": false, 00:12:40.322 "nvme_io": false, 00:12:40.322 "nvme_io_md": false, 00:12:40.322 "write_zeroes": true, 00:12:40.322 "zcopy": true, 00:12:40.322 "get_zone_info": false, 00:12:40.322 "zone_management": false, 00:12:40.322 "zone_append": false, 00:12:40.322 "compare": false, 00:12:40.322 "compare_and_write": false, 
00:12:40.322 "abort": true, 00:12:40.322 "seek_hole": false, 00:12:40.322 "seek_data": false, 00:12:40.322 "copy": true, 00:12:40.322 "nvme_iov_md": false 00:12:40.322 }, 00:12:40.322 "memory_domains": [ 00:12:40.322 { 00:12:40.322 "dma_device_id": "system", 00:12:40.322 "dma_device_type": 1 00:12:40.322 }, 00:12:40.322 { 00:12:40.322 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.322 "dma_device_type": 2 00:12:40.322 } 00:12:40.322 ], 00:12:40.322 "driver_specific": {} 00:12:40.322 } 00:12:40.322 ] 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.322 "name": "Existed_Raid", 00:12:40.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.322 "strip_size_kb": 64, 00:12:40.322 "state": "configuring", 00:12:40.322 "raid_level": "concat", 00:12:40.322 "superblock": false, 00:12:40.322 "num_base_bdevs": 4, 00:12:40.322 "num_base_bdevs_discovered": 3, 00:12:40.322 "num_base_bdevs_operational": 4, 00:12:40.322 "base_bdevs_list": [ 00:12:40.322 { 00:12:40.322 "name": "BaseBdev1", 00:12:40.322 "uuid": "d018a4e3-8c22-4cde-9789-ef72c588e9eb", 00:12:40.322 "is_configured": true, 00:12:40.322 "data_offset": 0, 00:12:40.322 "data_size": 65536 00:12:40.322 }, 00:12:40.322 { 00:12:40.322 "name": "BaseBdev2", 00:12:40.322 "uuid": "c3f6ce1a-1fee-43e3-b57b-44cb041e7bb5", 00:12:40.322 "is_configured": true, 00:12:40.322 "data_offset": 0, 00:12:40.322 "data_size": 65536 00:12:40.322 }, 00:12:40.322 { 00:12:40.322 "name": "BaseBdev3", 00:12:40.322 "uuid": "86b50702-d70f-4935-b7e9-e930798b68ef", 00:12:40.322 "is_configured": true, 00:12:40.322 "data_offset": 0, 00:12:40.322 "data_size": 65536 00:12:40.322 }, 00:12:40.322 { 00:12:40.322 "name": "BaseBdev4", 00:12:40.322 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:40.322 "is_configured": false, 
00:12:40.322 "data_offset": 0, 00:12:40.322 "data_size": 0 00:12:40.322 } 00:12:40.322 ] 00:12:40.322 }' 00:12:40.322 14:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.323 14:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.890 [2024-11-27 14:12:18.045016] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:40.890 [2024-11-27 14:12:18.045097] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:40.890 [2024-11-27 14:12:18.045126] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:40.890 [2024-11-27 14:12:18.045487] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:40.890 [2024-11-27 14:12:18.045704] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:40.890 [2024-11-27 14:12:18.045734] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:12:40.890 [2024-11-27 14:12:18.046068] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:40.890 BaseBdev4 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.890 [ 00:12:40.890 { 00:12:40.890 "name": "BaseBdev4", 00:12:40.890 "aliases": [ 00:12:40.890 "b57979b9-5761-40a1-acb8-8804cd1eab7f" 00:12:40.890 ], 00:12:40.890 "product_name": "Malloc disk", 00:12:40.890 "block_size": 512, 00:12:40.890 "num_blocks": 65536, 00:12:40.890 "uuid": "b57979b9-5761-40a1-acb8-8804cd1eab7f", 00:12:40.890 "assigned_rate_limits": { 00:12:40.890 "rw_ios_per_sec": 0, 00:12:40.890 "rw_mbytes_per_sec": 0, 00:12:40.890 "r_mbytes_per_sec": 0, 00:12:40.890 "w_mbytes_per_sec": 0 00:12:40.890 }, 00:12:40.890 "claimed": true, 00:12:40.890 "claim_type": "exclusive_write", 00:12:40.890 "zoned": false, 00:12:40.890 "supported_io_types": { 00:12:40.890 "read": true, 00:12:40.890 "write": true, 00:12:40.890 "unmap": true, 00:12:40.890 "flush": true, 00:12:40.890 "reset": true, 00:12:40.890 
"nvme_admin": false, 00:12:40.890 "nvme_io": false, 00:12:40.890 "nvme_io_md": false, 00:12:40.890 "write_zeroes": true, 00:12:40.890 "zcopy": true, 00:12:40.890 "get_zone_info": false, 00:12:40.890 "zone_management": false, 00:12:40.890 "zone_append": false, 00:12:40.890 "compare": false, 00:12:40.890 "compare_and_write": false, 00:12:40.890 "abort": true, 00:12:40.890 "seek_hole": false, 00:12:40.890 "seek_data": false, 00:12:40.890 "copy": true, 00:12:40.890 "nvme_iov_md": false 00:12:40.890 }, 00:12:40.890 "memory_domains": [ 00:12:40.890 { 00:12:40.890 "dma_device_id": "system", 00:12:40.890 "dma_device_type": 1 00:12:40.890 }, 00:12:40.890 { 00:12:40.890 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:40.890 "dma_device_type": 2 00:12:40.890 } 00:12:40.890 ], 00:12:40.890 "driver_specific": {} 00:12:40.890 } 00:12:40.890 ] 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:40.890 
14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.890 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:40.890 "name": "Existed_Raid", 00:12:40.890 "uuid": "1412d20b-523a-4b2c-8535-79a2bb78688b", 00:12:40.890 "strip_size_kb": 64, 00:12:40.890 "state": "online", 00:12:40.890 "raid_level": "concat", 00:12:40.890 "superblock": false, 00:12:40.891 "num_base_bdevs": 4, 00:12:40.891 "num_base_bdevs_discovered": 4, 00:12:40.891 "num_base_bdevs_operational": 4, 00:12:40.891 "base_bdevs_list": [ 00:12:40.891 { 00:12:40.891 "name": "BaseBdev1", 00:12:40.891 "uuid": "d018a4e3-8c22-4cde-9789-ef72c588e9eb", 00:12:40.891 "is_configured": true, 00:12:40.891 "data_offset": 0, 00:12:40.891 "data_size": 65536 00:12:40.891 }, 00:12:40.891 { 00:12:40.891 "name": "BaseBdev2", 00:12:40.891 "uuid": "c3f6ce1a-1fee-43e3-b57b-44cb041e7bb5", 00:12:40.891 "is_configured": true, 00:12:40.891 "data_offset": 0, 00:12:40.891 "data_size": 65536 00:12:40.891 }, 00:12:40.891 { 00:12:40.891 "name": "BaseBdev3", 
00:12:40.891 "uuid": "86b50702-d70f-4935-b7e9-e930798b68ef", 00:12:40.891 "is_configured": true, 00:12:40.891 "data_offset": 0, 00:12:40.891 "data_size": 65536 00:12:40.891 }, 00:12:40.891 { 00:12:40.891 "name": "BaseBdev4", 00:12:40.891 "uuid": "b57979b9-5761-40a1-acb8-8804cd1eab7f", 00:12:40.891 "is_configured": true, 00:12:40.891 "data_offset": 0, 00:12:40.891 "data_size": 65536 00:12:40.891 } 00:12:40.891 ] 00:12:40.891 }' 00:12:40.891 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:40.891 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.459 [2024-11-27 14:12:18.629659] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.459 
14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:41.459 "name": "Existed_Raid", 00:12:41.459 "aliases": [ 00:12:41.459 "1412d20b-523a-4b2c-8535-79a2bb78688b" 00:12:41.459 ], 00:12:41.459 "product_name": "Raid Volume", 00:12:41.459 "block_size": 512, 00:12:41.459 "num_blocks": 262144, 00:12:41.459 "uuid": "1412d20b-523a-4b2c-8535-79a2bb78688b", 00:12:41.459 "assigned_rate_limits": { 00:12:41.459 "rw_ios_per_sec": 0, 00:12:41.459 "rw_mbytes_per_sec": 0, 00:12:41.459 "r_mbytes_per_sec": 0, 00:12:41.459 "w_mbytes_per_sec": 0 00:12:41.459 }, 00:12:41.459 "claimed": false, 00:12:41.459 "zoned": false, 00:12:41.459 "supported_io_types": { 00:12:41.459 "read": true, 00:12:41.459 "write": true, 00:12:41.459 "unmap": true, 00:12:41.459 "flush": true, 00:12:41.459 "reset": true, 00:12:41.459 "nvme_admin": false, 00:12:41.459 "nvme_io": false, 00:12:41.459 "nvme_io_md": false, 00:12:41.459 "write_zeroes": true, 00:12:41.459 "zcopy": false, 00:12:41.459 "get_zone_info": false, 00:12:41.459 "zone_management": false, 00:12:41.459 "zone_append": false, 00:12:41.459 "compare": false, 00:12:41.459 "compare_and_write": false, 00:12:41.459 "abort": false, 00:12:41.459 "seek_hole": false, 00:12:41.459 "seek_data": false, 00:12:41.459 "copy": false, 00:12:41.459 "nvme_iov_md": false 00:12:41.459 }, 00:12:41.459 "memory_domains": [ 00:12:41.459 { 00:12:41.459 "dma_device_id": "system", 00:12:41.459 "dma_device_type": 1 00:12:41.459 }, 00:12:41.459 { 00:12:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.459 "dma_device_type": 2 00:12:41.459 }, 00:12:41.459 { 00:12:41.459 "dma_device_id": "system", 00:12:41.459 "dma_device_type": 1 00:12:41.459 }, 00:12:41.459 { 00:12:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.459 "dma_device_type": 2 00:12:41.459 }, 00:12:41.459 { 00:12:41.459 "dma_device_id": "system", 00:12:41.459 "dma_device_type": 1 00:12:41.459 }, 00:12:41.459 { 00:12:41.459 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:12:41.459 "dma_device_type": 2 00:12:41.459 }, 00:12:41.459 { 00:12:41.459 "dma_device_id": "system", 00:12:41.459 "dma_device_type": 1 00:12:41.459 }, 00:12:41.459 { 00:12:41.459 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:41.459 "dma_device_type": 2 00:12:41.459 } 00:12:41.459 ], 00:12:41.459 "driver_specific": { 00:12:41.459 "raid": { 00:12:41.459 "uuid": "1412d20b-523a-4b2c-8535-79a2bb78688b", 00:12:41.459 "strip_size_kb": 64, 00:12:41.459 "state": "online", 00:12:41.459 "raid_level": "concat", 00:12:41.459 "superblock": false, 00:12:41.459 "num_base_bdevs": 4, 00:12:41.459 "num_base_bdevs_discovered": 4, 00:12:41.459 "num_base_bdevs_operational": 4, 00:12:41.459 "base_bdevs_list": [ 00:12:41.459 { 00:12:41.459 "name": "BaseBdev1", 00:12:41.459 "uuid": "d018a4e3-8c22-4cde-9789-ef72c588e9eb", 00:12:41.459 "is_configured": true, 00:12:41.459 "data_offset": 0, 00:12:41.459 "data_size": 65536 00:12:41.459 }, 00:12:41.459 { 00:12:41.459 "name": "BaseBdev2", 00:12:41.459 "uuid": "c3f6ce1a-1fee-43e3-b57b-44cb041e7bb5", 00:12:41.459 "is_configured": true, 00:12:41.459 "data_offset": 0, 00:12:41.459 "data_size": 65536 00:12:41.459 }, 00:12:41.459 { 00:12:41.459 "name": "BaseBdev3", 00:12:41.459 "uuid": "86b50702-d70f-4935-b7e9-e930798b68ef", 00:12:41.459 "is_configured": true, 00:12:41.459 "data_offset": 0, 00:12:41.459 "data_size": 65536 00:12:41.459 }, 00:12:41.459 { 00:12:41.459 "name": "BaseBdev4", 00:12:41.459 "uuid": "b57979b9-5761-40a1-acb8-8804cd1eab7f", 00:12:41.459 "is_configured": true, 00:12:41.459 "data_offset": 0, 00:12:41.459 "data_size": 65536 00:12:41.459 } 00:12:41.459 ] 00:12:41.459 } 00:12:41.459 } 00:12:41.459 }' 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:41.459 BaseBdev2 
00:12:41.459 BaseBdev3 00:12:41.459 BaseBdev4' 00:12:41.459 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.718 14:12:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:41.718 14:12:18 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.718 14:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.978 [2024-11-27 14:12:18.997410] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:41.978 [2024-11-27 14:12:18.997451] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:41.978 [2024-11-27 14:12:18.997521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:41.978 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.978 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:41.978 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # return 1 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local 
strip_size=64 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:41.979 "name": "Existed_Raid", 00:12:41.979 "uuid": "1412d20b-523a-4b2c-8535-79a2bb78688b", 00:12:41.979 "strip_size_kb": 64, 00:12:41.979 "state": "offline", 00:12:41.979 "raid_level": "concat", 00:12:41.979 "superblock": false, 00:12:41.979 "num_base_bdevs": 4, 00:12:41.979 "num_base_bdevs_discovered": 3, 00:12:41.979 "num_base_bdevs_operational": 3, 00:12:41.979 "base_bdevs_list": [ 00:12:41.979 { 00:12:41.979 "name": null, 00:12:41.979 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:41.979 "is_configured": false, 00:12:41.979 "data_offset": 0, 00:12:41.979 "data_size": 65536 00:12:41.979 }, 00:12:41.979 { 00:12:41.979 "name": "BaseBdev2", 00:12:41.979 "uuid": "c3f6ce1a-1fee-43e3-b57b-44cb041e7bb5", 00:12:41.979 "is_configured": 
true, 00:12:41.979 "data_offset": 0, 00:12:41.979 "data_size": 65536 00:12:41.979 }, 00:12:41.979 { 00:12:41.979 "name": "BaseBdev3", 00:12:41.979 "uuid": "86b50702-d70f-4935-b7e9-e930798b68ef", 00:12:41.979 "is_configured": true, 00:12:41.979 "data_offset": 0, 00:12:41.979 "data_size": 65536 00:12:41.979 }, 00:12:41.979 { 00:12:41.979 "name": "BaseBdev4", 00:12:41.979 "uuid": "b57979b9-5761-40a1-acb8-8804cd1eab7f", 00:12:41.979 "is_configured": true, 00:12:41.979 "data_offset": 0, 00:12:41.979 "data_size": 65536 00:12:41.979 } 00:12:41.979 ] 00:12:41.979 }' 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:41.979 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.544 [2024-11-27 14:12:19.708129] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:42.544 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.803 [2024-11-27 14:12:19.855933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:42.803 14:12:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:42.803 14:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.803 [2024-11-27 14:12:20.002014] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:42.803 [2024-11-27 14:12:20.002077] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.062 BaseBdev2 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 
00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.062 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.063 [ 00:12:43.063 { 00:12:43.063 "name": "BaseBdev2", 00:12:43.063 "aliases": [ 00:12:43.063 "1e1b6bb2-1d60-4c95-93d4-fd0eff815429" 00:12:43.063 ], 00:12:43.063 "product_name": "Malloc disk", 00:12:43.063 "block_size": 512, 00:12:43.063 "num_blocks": 65536, 00:12:43.063 "uuid": "1e1b6bb2-1d60-4c95-93d4-fd0eff815429", 00:12:43.063 "assigned_rate_limits": { 00:12:43.063 "rw_ios_per_sec": 0, 00:12:43.063 "rw_mbytes_per_sec": 0, 00:12:43.063 "r_mbytes_per_sec": 0, 00:12:43.063 "w_mbytes_per_sec": 0 00:12:43.063 }, 00:12:43.063 "claimed": false, 00:12:43.063 "zoned": false, 00:12:43.063 "supported_io_types": { 00:12:43.063 "read": true, 00:12:43.063 "write": true, 00:12:43.063 "unmap": true, 00:12:43.063 "flush": true, 00:12:43.063 "reset": true, 00:12:43.063 "nvme_admin": false, 00:12:43.063 "nvme_io": false, 00:12:43.063 "nvme_io_md": false, 00:12:43.063 "write_zeroes": true, 00:12:43.063 "zcopy": true, 00:12:43.063 "get_zone_info": false, 00:12:43.063 "zone_management": false, 00:12:43.063 "zone_append": false, 00:12:43.063 "compare": false, 00:12:43.063 "compare_and_write": false, 00:12:43.063 "abort": true, 00:12:43.063 "seek_hole": false, 00:12:43.063 "seek_data": false, 
00:12:43.063 "copy": true, 00:12:43.063 "nvme_iov_md": false 00:12:43.063 }, 00:12:43.063 "memory_domains": [ 00:12:43.063 { 00:12:43.063 "dma_device_id": "system", 00:12:43.063 "dma_device_type": 1 00:12:43.063 }, 00:12:43.063 { 00:12:43.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.063 "dma_device_type": 2 00:12:43.063 } 00:12:43.063 ], 00:12:43.063 "driver_specific": {} 00:12:43.063 } 00:12:43.063 ] 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.063 BaseBdev3 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.063 
14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.063 [ 00:12:43.063 { 00:12:43.063 "name": "BaseBdev3", 00:12:43.063 "aliases": [ 00:12:43.063 "624f0924-7764-44ea-af91-1bd007d822c6" 00:12:43.063 ], 00:12:43.063 "product_name": "Malloc disk", 00:12:43.063 "block_size": 512, 00:12:43.063 "num_blocks": 65536, 00:12:43.063 "uuid": "624f0924-7764-44ea-af91-1bd007d822c6", 00:12:43.063 "assigned_rate_limits": { 00:12:43.063 "rw_ios_per_sec": 0, 00:12:43.063 "rw_mbytes_per_sec": 0, 00:12:43.063 "r_mbytes_per_sec": 0, 00:12:43.063 "w_mbytes_per_sec": 0 00:12:43.063 }, 00:12:43.063 "claimed": false, 00:12:43.063 "zoned": false, 00:12:43.063 "supported_io_types": { 00:12:43.063 "read": true, 00:12:43.063 "write": true, 00:12:43.063 "unmap": true, 00:12:43.063 "flush": true, 00:12:43.063 "reset": true, 00:12:43.063 "nvme_admin": false, 00:12:43.063 "nvme_io": false, 00:12:43.063 "nvme_io_md": false, 00:12:43.063 "write_zeroes": true, 00:12:43.063 "zcopy": true, 00:12:43.063 "get_zone_info": false, 00:12:43.063 "zone_management": false, 00:12:43.063 "zone_append": false, 00:12:43.063 "compare": false, 00:12:43.063 "compare_and_write": false, 00:12:43.063 "abort": true, 00:12:43.063 "seek_hole": false, 00:12:43.063 "seek_data": false, 00:12:43.063 
"copy": true, 00:12:43.063 "nvme_iov_md": false 00:12:43.063 }, 00:12:43.063 "memory_domains": [ 00:12:43.063 { 00:12:43.063 "dma_device_id": "system", 00:12:43.063 "dma_device_type": 1 00:12:43.063 }, 00:12:43.063 { 00:12:43.063 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.063 "dma_device_type": 2 00:12:43.063 } 00:12:43.063 ], 00:12:43.063 "driver_specific": {} 00:12:43.063 } 00:12:43.063 ] 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.063 BaseBdev4 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:43.063 14:12:20 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.063 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.322 [ 00:12:43.322 { 00:12:43.322 "name": "BaseBdev4", 00:12:43.323 "aliases": [ 00:12:43.323 "f3aef3ce-b430-4562-a029-135176fc3fd2" 00:12:43.323 ], 00:12:43.323 "product_name": "Malloc disk", 00:12:43.323 "block_size": 512, 00:12:43.323 "num_blocks": 65536, 00:12:43.323 "uuid": "f3aef3ce-b430-4562-a029-135176fc3fd2", 00:12:43.323 "assigned_rate_limits": { 00:12:43.323 "rw_ios_per_sec": 0, 00:12:43.323 "rw_mbytes_per_sec": 0, 00:12:43.323 "r_mbytes_per_sec": 0, 00:12:43.323 "w_mbytes_per_sec": 0 00:12:43.323 }, 00:12:43.323 "claimed": false, 00:12:43.323 "zoned": false, 00:12:43.323 "supported_io_types": { 00:12:43.323 "read": true, 00:12:43.323 "write": true, 00:12:43.323 "unmap": true, 00:12:43.323 "flush": true, 00:12:43.323 "reset": true, 00:12:43.323 "nvme_admin": false, 00:12:43.323 "nvme_io": false, 00:12:43.323 "nvme_io_md": false, 00:12:43.323 "write_zeroes": true, 00:12:43.323 "zcopy": true, 00:12:43.323 "get_zone_info": false, 00:12:43.323 "zone_management": false, 00:12:43.323 "zone_append": false, 00:12:43.323 "compare": false, 00:12:43.323 "compare_and_write": false, 00:12:43.323 "abort": true, 00:12:43.323 "seek_hole": false, 00:12:43.323 "seek_data": false, 00:12:43.323 "copy": true, 
00:12:43.323 "nvme_iov_md": false 00:12:43.323 }, 00:12:43.323 "memory_domains": [ 00:12:43.323 { 00:12:43.323 "dma_device_id": "system", 00:12:43.323 "dma_device_type": 1 00:12:43.323 }, 00:12:43.323 { 00:12:43.323 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:43.323 "dma_device_type": 2 00:12:43.323 } 00:12:43.323 ], 00:12:43.323 "driver_specific": {} 00:12:43.323 } 00:12:43.323 ] 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.323 [2024-11-27 14:12:20.354541] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:43.323 [2024-11-27 14:12:20.354604] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:43.323 [2024-11-27 14:12:20.354635] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:43.323 [2024-11-27 14:12:20.357069] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:43.323 [2024-11-27 14:12:20.357154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.323 14:12:20 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.323 "name": "Existed_Raid", 00:12:43.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.323 "strip_size_kb": 64, 00:12:43.323 "state": "configuring", 00:12:43.323 
"raid_level": "concat", 00:12:43.323 "superblock": false, 00:12:43.323 "num_base_bdevs": 4, 00:12:43.323 "num_base_bdevs_discovered": 3, 00:12:43.323 "num_base_bdevs_operational": 4, 00:12:43.323 "base_bdevs_list": [ 00:12:43.323 { 00:12:43.323 "name": "BaseBdev1", 00:12:43.323 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.323 "is_configured": false, 00:12:43.323 "data_offset": 0, 00:12:43.323 "data_size": 0 00:12:43.323 }, 00:12:43.323 { 00:12:43.323 "name": "BaseBdev2", 00:12:43.323 "uuid": "1e1b6bb2-1d60-4c95-93d4-fd0eff815429", 00:12:43.323 "is_configured": true, 00:12:43.323 "data_offset": 0, 00:12:43.323 "data_size": 65536 00:12:43.323 }, 00:12:43.323 { 00:12:43.323 "name": "BaseBdev3", 00:12:43.323 "uuid": "624f0924-7764-44ea-af91-1bd007d822c6", 00:12:43.323 "is_configured": true, 00:12:43.323 "data_offset": 0, 00:12:43.323 "data_size": 65536 00:12:43.323 }, 00:12:43.323 { 00:12:43.323 "name": "BaseBdev4", 00:12:43.323 "uuid": "f3aef3ce-b430-4562-a029-135176fc3fd2", 00:12:43.323 "is_configured": true, 00:12:43.323 "data_offset": 0, 00:12:43.323 "data_size": 65536 00:12:43.323 } 00:12:43.323 ] 00:12:43.323 }' 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.323 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.892 [2024-11-27 14:12:20.914760] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:43.892 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:43.893 "name": "Existed_Raid", 00:12:43.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.893 "strip_size_kb": 64, 00:12:43.893 "state": "configuring", 00:12:43.893 "raid_level": "concat", 00:12:43.893 "superblock": false, 
00:12:43.893 "num_base_bdevs": 4, 00:12:43.893 "num_base_bdevs_discovered": 2, 00:12:43.893 "num_base_bdevs_operational": 4, 00:12:43.893 "base_bdevs_list": [ 00:12:43.893 { 00:12:43.893 "name": "BaseBdev1", 00:12:43.893 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:43.893 "is_configured": false, 00:12:43.893 "data_offset": 0, 00:12:43.893 "data_size": 0 00:12:43.893 }, 00:12:43.893 { 00:12:43.893 "name": null, 00:12:43.893 "uuid": "1e1b6bb2-1d60-4c95-93d4-fd0eff815429", 00:12:43.893 "is_configured": false, 00:12:43.893 "data_offset": 0, 00:12:43.893 "data_size": 65536 00:12:43.893 }, 00:12:43.893 { 00:12:43.893 "name": "BaseBdev3", 00:12:43.893 "uuid": "624f0924-7764-44ea-af91-1bd007d822c6", 00:12:43.893 "is_configured": true, 00:12:43.893 "data_offset": 0, 00:12:43.893 "data_size": 65536 00:12:43.893 }, 00:12:43.893 { 00:12:43.893 "name": "BaseBdev4", 00:12:43.893 "uuid": "f3aef3ce-b430-4562-a029-135176fc3fd2", 00:12:43.893 "is_configured": true, 00:12:43.893 "data_offset": 0, 00:12:43.893 "data_size": 65536 00:12:43.893 } 00:12:43.893 ] 00:12:43.893 }' 00:12:43.893 14:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:43.893 14:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.152 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.152 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:44.152 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.152 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:44.411 14:12:21 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.411 [2024-11-27 14:12:21.501356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:44.411 BaseBdev1 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:44.411 [ 00:12:44.411 { 00:12:44.411 "name": "BaseBdev1", 00:12:44.411 "aliases": [ 00:12:44.411 "06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6" 00:12:44.411 ], 00:12:44.411 "product_name": "Malloc disk", 00:12:44.411 "block_size": 512, 00:12:44.411 "num_blocks": 65536, 00:12:44.411 "uuid": "06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6", 00:12:44.411 "assigned_rate_limits": { 00:12:44.411 "rw_ios_per_sec": 0, 00:12:44.411 "rw_mbytes_per_sec": 0, 00:12:44.411 "r_mbytes_per_sec": 0, 00:12:44.411 "w_mbytes_per_sec": 0 00:12:44.411 }, 00:12:44.411 "claimed": true, 00:12:44.411 "claim_type": "exclusive_write", 00:12:44.411 "zoned": false, 00:12:44.411 "supported_io_types": { 00:12:44.411 "read": true, 00:12:44.411 "write": true, 00:12:44.411 "unmap": true, 00:12:44.411 "flush": true, 00:12:44.411 "reset": true, 00:12:44.411 "nvme_admin": false, 00:12:44.411 "nvme_io": false, 00:12:44.411 "nvme_io_md": false, 00:12:44.411 "write_zeroes": true, 00:12:44.411 "zcopy": true, 00:12:44.411 "get_zone_info": false, 00:12:44.411 "zone_management": false, 00:12:44.411 "zone_append": false, 00:12:44.411 "compare": false, 00:12:44.411 "compare_and_write": false, 00:12:44.411 "abort": true, 00:12:44.411 "seek_hole": false, 00:12:44.411 "seek_data": false, 00:12:44.411 "copy": true, 00:12:44.411 "nvme_iov_md": false 00:12:44.411 }, 00:12:44.411 "memory_domains": [ 00:12:44.411 { 00:12:44.411 "dma_device_id": "system", 00:12:44.411 "dma_device_type": 1 00:12:44.411 }, 00:12:44.411 { 00:12:44.411 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:44.411 "dma_device_type": 2 00:12:44.411 } 00:12:44.411 ], 00:12:44.411 "driver_specific": {} 00:12:44.411 } 00:12:44.411 ] 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.411 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- 
# verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.412 "name": "Existed_Raid", 00:12:44.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.412 "strip_size_kb": 64, 00:12:44.412 "state": "configuring", 00:12:44.412 "raid_level": "concat", 00:12:44.412 "superblock": false, 
00:12:44.412 "num_base_bdevs": 4, 00:12:44.412 "num_base_bdevs_discovered": 3, 00:12:44.412 "num_base_bdevs_operational": 4, 00:12:44.412 "base_bdevs_list": [ 00:12:44.412 { 00:12:44.412 "name": "BaseBdev1", 00:12:44.412 "uuid": "06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6", 00:12:44.412 "is_configured": true, 00:12:44.412 "data_offset": 0, 00:12:44.412 "data_size": 65536 00:12:44.412 }, 00:12:44.412 { 00:12:44.412 "name": null, 00:12:44.412 "uuid": "1e1b6bb2-1d60-4c95-93d4-fd0eff815429", 00:12:44.412 "is_configured": false, 00:12:44.412 "data_offset": 0, 00:12:44.412 "data_size": 65536 00:12:44.412 }, 00:12:44.412 { 00:12:44.412 "name": "BaseBdev3", 00:12:44.412 "uuid": "624f0924-7764-44ea-af91-1bd007d822c6", 00:12:44.412 "is_configured": true, 00:12:44.412 "data_offset": 0, 00:12:44.412 "data_size": 65536 00:12:44.412 }, 00:12:44.412 { 00:12:44.412 "name": "BaseBdev4", 00:12:44.412 "uuid": "f3aef3ce-b430-4562-a029-135176fc3fd2", 00:12:44.412 "is_configured": true, 00:12:44.412 "data_offset": 0, 00:12:44.412 "data_size": 65536 00:12:44.412 } 00:12:44.412 ] 00:12:44.412 }' 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.412 14:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:44.981 14:12:22 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.981 [2024-11-27 14:12:22.113605] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:44.981 14:12:22 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:44.981 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:44.981 "name": "Existed_Raid", 00:12:44.981 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:44.981 "strip_size_kb": 64, 00:12:44.981 "state": "configuring", 00:12:44.981 "raid_level": "concat", 00:12:44.981 "superblock": false, 00:12:44.981 "num_base_bdevs": 4, 00:12:44.981 "num_base_bdevs_discovered": 2, 00:12:44.981 "num_base_bdevs_operational": 4, 00:12:44.981 "base_bdevs_list": [ 00:12:44.981 { 00:12:44.981 "name": "BaseBdev1", 00:12:44.981 "uuid": "06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6", 00:12:44.981 "is_configured": true, 00:12:44.981 "data_offset": 0, 00:12:44.982 "data_size": 65536 00:12:44.982 }, 00:12:44.982 { 00:12:44.982 "name": null, 00:12:44.982 "uuid": "1e1b6bb2-1d60-4c95-93d4-fd0eff815429", 00:12:44.982 "is_configured": false, 00:12:44.982 "data_offset": 0, 00:12:44.982 "data_size": 65536 00:12:44.982 }, 00:12:44.982 { 00:12:44.982 "name": null, 00:12:44.982 "uuid": "624f0924-7764-44ea-af91-1bd007d822c6", 00:12:44.982 "is_configured": false, 00:12:44.982 "data_offset": 0, 00:12:44.982 "data_size": 65536 00:12:44.982 }, 00:12:44.982 { 00:12:44.982 "name": "BaseBdev4", 00:12:44.982 "uuid": "f3aef3ce-b430-4562-a029-135176fc3fd2", 00:12:44.982 "is_configured": true, 00:12:44.982 "data_offset": 0, 00:12:44.982 "data_size": 65536 00:12:44.982 } 00:12:44.982 ] 00:12:44.982 }' 00:12:44.982 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:44.982 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.549 [2024-11-27 14:12:22.681776] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:45.549 "name": "Existed_Raid", 00:12:45.549 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:45.549 "strip_size_kb": 64, 00:12:45.549 "state": "configuring", 00:12:45.549 "raid_level": "concat", 00:12:45.549 "superblock": false, 00:12:45.549 "num_base_bdevs": 4, 00:12:45.549 "num_base_bdevs_discovered": 3, 00:12:45.549 "num_base_bdevs_operational": 4, 00:12:45.549 "base_bdevs_list": [ 00:12:45.549 { 00:12:45.549 "name": "BaseBdev1", 00:12:45.549 "uuid": "06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6", 00:12:45.549 "is_configured": true, 00:12:45.549 "data_offset": 0, 00:12:45.549 "data_size": 65536 00:12:45.549 }, 00:12:45.549 { 00:12:45.549 "name": null, 00:12:45.549 "uuid": "1e1b6bb2-1d60-4c95-93d4-fd0eff815429", 00:12:45.549 "is_configured": false, 00:12:45.549 "data_offset": 0, 00:12:45.549 "data_size": 65536 00:12:45.549 }, 00:12:45.549 { 00:12:45.549 "name": "BaseBdev3", 00:12:45.549 "uuid": 
"624f0924-7764-44ea-af91-1bd007d822c6", 00:12:45.549 "is_configured": true, 00:12:45.549 "data_offset": 0, 00:12:45.549 "data_size": 65536 00:12:45.549 }, 00:12:45.549 { 00:12:45.549 "name": "BaseBdev4", 00:12:45.549 "uuid": "f3aef3ce-b430-4562-a029-135176fc3fd2", 00:12:45.549 "is_configured": true, 00:12:45.549 "data_offset": 0, 00:12:45.549 "data_size": 65536 00:12:45.549 } 00:12:45.549 ] 00:12:45.549 }' 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:45.549 14:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.117 [2024-11-27 14:12:23.261992] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 
00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.117 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.376 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.376 "name": "Existed_Raid", 00:12:46.376 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.376 "strip_size_kb": 64, 00:12:46.376 "state": "configuring", 00:12:46.376 "raid_level": "concat", 00:12:46.376 "superblock": false, 00:12:46.376 "num_base_bdevs": 4, 00:12:46.376 
"num_base_bdevs_discovered": 2, 00:12:46.376 "num_base_bdevs_operational": 4, 00:12:46.376 "base_bdevs_list": [ 00:12:46.376 { 00:12:46.376 "name": null, 00:12:46.376 "uuid": "06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6", 00:12:46.376 "is_configured": false, 00:12:46.376 "data_offset": 0, 00:12:46.376 "data_size": 65536 00:12:46.376 }, 00:12:46.376 { 00:12:46.376 "name": null, 00:12:46.376 "uuid": "1e1b6bb2-1d60-4c95-93d4-fd0eff815429", 00:12:46.376 "is_configured": false, 00:12:46.376 "data_offset": 0, 00:12:46.376 "data_size": 65536 00:12:46.376 }, 00:12:46.376 { 00:12:46.376 "name": "BaseBdev3", 00:12:46.376 "uuid": "624f0924-7764-44ea-af91-1bd007d822c6", 00:12:46.376 "is_configured": true, 00:12:46.376 "data_offset": 0, 00:12:46.376 "data_size": 65536 00:12:46.376 }, 00:12:46.376 { 00:12:46.376 "name": "BaseBdev4", 00:12:46.376 "uuid": "f3aef3ce-b430-4562-a029-135176fc3fd2", 00:12:46.376 "is_configured": true, 00:12:46.376 "data_offset": 0, 00:12:46.376 "data_size": 65536 00:12:46.376 } 00:12:46.376 ] 00:12:46.376 }' 00:12:46.376 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.376 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.635 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.635 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.635 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.635 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:46.894 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- 
# rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.895 [2024-11-27 14:12:23.961198] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:46.895 14:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:46.895 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:46.895 "name": "Existed_Raid", 00:12:46.895 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:46.895 "strip_size_kb": 64, 00:12:46.895 "state": "configuring", 00:12:46.895 "raid_level": "concat", 00:12:46.895 "superblock": false, 00:12:46.895 "num_base_bdevs": 4, 00:12:46.895 "num_base_bdevs_discovered": 3, 00:12:46.895 "num_base_bdevs_operational": 4, 00:12:46.895 "base_bdevs_list": [ 00:12:46.895 { 00:12:46.895 "name": null, 00:12:46.895 "uuid": "06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6", 00:12:46.895 "is_configured": false, 00:12:46.895 "data_offset": 0, 00:12:46.895 "data_size": 65536 00:12:46.895 }, 00:12:46.895 { 00:12:46.895 "name": "BaseBdev2", 00:12:46.895 "uuid": "1e1b6bb2-1d60-4c95-93d4-fd0eff815429", 00:12:46.895 "is_configured": true, 00:12:46.895 "data_offset": 0, 00:12:46.895 "data_size": 65536 00:12:46.895 }, 00:12:46.895 { 00:12:46.895 "name": "BaseBdev3", 00:12:46.895 "uuid": "624f0924-7764-44ea-af91-1bd007d822c6", 00:12:46.895 "is_configured": true, 00:12:46.895 "data_offset": 0, 00:12:46.895 "data_size": 65536 00:12:46.895 }, 00:12:46.895 { 00:12:46.895 "name": "BaseBdev4", 00:12:46.895 "uuid": "f3aef3ce-b430-4562-a029-135176fc3fd2", 00:12:46.895 "is_configured": true, 00:12:46.895 "data_offset": 0, 00:12:46.895 "data_size": 65536 00:12:46.895 } 00:12:46.895 ] 00:12:46.895 }' 00:12:46.895 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:46.895 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 
00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.463 [2024-11-27 14:12:24.631643] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:47.463 [2024-11-27 14:12:24.631722] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:12:47.463 [2024-11-27 14:12:24.631734] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:12:47.463 [2024-11-27 14:12:24.632111] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 
0x60d0000063c0 00:12:47.463 [2024-11-27 14:12:24.632297] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:12:47.463 [2024-11-27 14:12:24.632326] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:12:47.463 [2024-11-27 14:12:24.632619] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:47.463 NewBaseBdev 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:12:47.463 [ 00:12:47.463 { 00:12:47.463 "name": "NewBaseBdev", 00:12:47.463 "aliases": [ 00:12:47.463 "06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6" 00:12:47.463 ], 00:12:47.463 "product_name": "Malloc disk", 00:12:47.463 "block_size": 512, 00:12:47.463 "num_blocks": 65536, 00:12:47.463 "uuid": "06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6", 00:12:47.463 "assigned_rate_limits": { 00:12:47.463 "rw_ios_per_sec": 0, 00:12:47.463 "rw_mbytes_per_sec": 0, 00:12:47.463 "r_mbytes_per_sec": 0, 00:12:47.463 "w_mbytes_per_sec": 0 00:12:47.463 }, 00:12:47.463 "claimed": true, 00:12:47.463 "claim_type": "exclusive_write", 00:12:47.463 "zoned": false, 00:12:47.463 "supported_io_types": { 00:12:47.463 "read": true, 00:12:47.463 "write": true, 00:12:47.463 "unmap": true, 00:12:47.463 "flush": true, 00:12:47.463 "reset": true, 00:12:47.463 "nvme_admin": false, 00:12:47.463 "nvme_io": false, 00:12:47.463 "nvme_io_md": false, 00:12:47.463 "write_zeroes": true, 00:12:47.463 "zcopy": true, 00:12:47.463 "get_zone_info": false, 00:12:47.463 "zone_management": false, 00:12:47.463 "zone_append": false, 00:12:47.463 "compare": false, 00:12:47.463 "compare_and_write": false, 00:12:47.463 "abort": true, 00:12:47.463 "seek_hole": false, 00:12:47.463 "seek_data": false, 00:12:47.463 "copy": true, 00:12:47.463 "nvme_iov_md": false 00:12:47.463 }, 00:12:47.463 "memory_domains": [ 00:12:47.463 { 00:12:47.463 "dma_device_id": "system", 00:12:47.463 "dma_device_type": 1 00:12:47.463 }, 00:12:47.463 { 00:12:47.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:47.463 "dma_device_type": 2 00:12:47.463 } 00:12:47.463 ], 00:12:47.463 "driver_specific": {} 00:12:47.463 } 00:12:47.463 ] 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 
-- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:47.463 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:47.463 "name": "Existed_Raid", 00:12:47.463 "uuid": "efa4b62b-4a90-4a83-9feb-100bc5a3e786", 00:12:47.463 "strip_size_kb": 64, 00:12:47.463 "state": "online", 00:12:47.463 "raid_level": "concat", 00:12:47.463 "superblock": false, 00:12:47.463 
"num_base_bdevs": 4, 00:12:47.463 "num_base_bdevs_discovered": 4, 00:12:47.463 "num_base_bdevs_operational": 4, 00:12:47.463 "base_bdevs_list": [ 00:12:47.463 { 00:12:47.463 "name": "NewBaseBdev", 00:12:47.463 "uuid": "06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6", 00:12:47.463 "is_configured": true, 00:12:47.463 "data_offset": 0, 00:12:47.464 "data_size": 65536 00:12:47.464 }, 00:12:47.464 { 00:12:47.464 "name": "BaseBdev2", 00:12:47.464 "uuid": "1e1b6bb2-1d60-4c95-93d4-fd0eff815429", 00:12:47.464 "is_configured": true, 00:12:47.464 "data_offset": 0, 00:12:47.464 "data_size": 65536 00:12:47.464 }, 00:12:47.464 { 00:12:47.464 "name": "BaseBdev3", 00:12:47.464 "uuid": "624f0924-7764-44ea-af91-1bd007d822c6", 00:12:47.464 "is_configured": true, 00:12:47.464 "data_offset": 0, 00:12:47.464 "data_size": 65536 00:12:47.464 }, 00:12:47.464 { 00:12:47.464 "name": "BaseBdev4", 00:12:47.464 "uuid": "f3aef3ce-b430-4562-a029-135176fc3fd2", 00:12:47.464 "is_configured": true, 00:12:47.464 "data_offset": 0, 00:12:47.464 "data_size": 65536 00:12:47.464 } 00:12:47.464 ] 00:12:47.464 }' 00:12:47.464 14:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:47.464 14:12:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.032 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:12:48.032 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:48.032 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:48.032 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:48.032 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:12:48.032 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:48.032 14:12:25 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:48.032 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:48.032 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.032 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.032 [2024-11-27 14:12:25.212389] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:48.032 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.032 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:48.032 "name": "Existed_Raid", 00:12:48.032 "aliases": [ 00:12:48.032 "efa4b62b-4a90-4a83-9feb-100bc5a3e786" 00:12:48.032 ], 00:12:48.032 "product_name": "Raid Volume", 00:12:48.032 "block_size": 512, 00:12:48.032 "num_blocks": 262144, 00:12:48.032 "uuid": "efa4b62b-4a90-4a83-9feb-100bc5a3e786", 00:12:48.032 "assigned_rate_limits": { 00:12:48.032 "rw_ios_per_sec": 0, 00:12:48.032 "rw_mbytes_per_sec": 0, 00:12:48.032 "r_mbytes_per_sec": 0, 00:12:48.032 "w_mbytes_per_sec": 0 00:12:48.032 }, 00:12:48.032 "claimed": false, 00:12:48.032 "zoned": false, 00:12:48.032 "supported_io_types": { 00:12:48.032 "read": true, 00:12:48.032 "write": true, 00:12:48.032 "unmap": true, 00:12:48.032 "flush": true, 00:12:48.032 "reset": true, 00:12:48.032 "nvme_admin": false, 00:12:48.032 "nvme_io": false, 00:12:48.032 "nvme_io_md": false, 00:12:48.032 "write_zeroes": true, 00:12:48.032 "zcopy": false, 00:12:48.032 "get_zone_info": false, 00:12:48.032 "zone_management": false, 00:12:48.032 "zone_append": false, 00:12:48.032 "compare": false, 00:12:48.032 "compare_and_write": false, 00:12:48.032 "abort": false, 00:12:48.032 "seek_hole": false, 00:12:48.032 "seek_data": false, 00:12:48.032 "copy": false, 00:12:48.032 "nvme_iov_md": false 00:12:48.032 }, 
00:12:48.032 "memory_domains": [ 00:12:48.032 { 00:12:48.032 "dma_device_id": "system", 00:12:48.032 "dma_device_type": 1 00:12:48.032 }, 00:12:48.032 { 00:12:48.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.032 "dma_device_type": 2 00:12:48.032 }, 00:12:48.032 { 00:12:48.032 "dma_device_id": "system", 00:12:48.032 "dma_device_type": 1 00:12:48.032 }, 00:12:48.032 { 00:12:48.032 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.033 "dma_device_type": 2 00:12:48.033 }, 00:12:48.033 { 00:12:48.033 "dma_device_id": "system", 00:12:48.033 "dma_device_type": 1 00:12:48.033 }, 00:12:48.033 { 00:12:48.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.033 "dma_device_type": 2 00:12:48.033 }, 00:12:48.033 { 00:12:48.033 "dma_device_id": "system", 00:12:48.033 "dma_device_type": 1 00:12:48.033 }, 00:12:48.033 { 00:12:48.033 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:48.033 "dma_device_type": 2 00:12:48.033 } 00:12:48.033 ], 00:12:48.033 "driver_specific": { 00:12:48.033 "raid": { 00:12:48.033 "uuid": "efa4b62b-4a90-4a83-9feb-100bc5a3e786", 00:12:48.033 "strip_size_kb": 64, 00:12:48.033 "state": "online", 00:12:48.033 "raid_level": "concat", 00:12:48.033 "superblock": false, 00:12:48.033 "num_base_bdevs": 4, 00:12:48.033 "num_base_bdevs_discovered": 4, 00:12:48.033 "num_base_bdevs_operational": 4, 00:12:48.033 "base_bdevs_list": [ 00:12:48.033 { 00:12:48.033 "name": "NewBaseBdev", 00:12:48.033 "uuid": "06d4f6bf-f02c-4d6e-8c96-c3e6ff7640c6", 00:12:48.033 "is_configured": true, 00:12:48.033 "data_offset": 0, 00:12:48.033 "data_size": 65536 00:12:48.033 }, 00:12:48.033 { 00:12:48.033 "name": "BaseBdev2", 00:12:48.033 "uuid": "1e1b6bb2-1d60-4c95-93d4-fd0eff815429", 00:12:48.033 "is_configured": true, 00:12:48.033 "data_offset": 0, 00:12:48.033 "data_size": 65536 00:12:48.033 }, 00:12:48.033 { 00:12:48.033 "name": "BaseBdev3", 00:12:48.033 "uuid": "624f0924-7764-44ea-af91-1bd007d822c6", 00:12:48.033 "is_configured": true, 00:12:48.033 "data_offset": 0, 
00:12:48.033 "data_size": 65536 00:12:48.033 }, 00:12:48.033 { 00:12:48.033 "name": "BaseBdev4", 00:12:48.033 "uuid": "f3aef3ce-b430-4562-a029-135176fc3fd2", 00:12:48.033 "is_configured": true, 00:12:48.033 "data_offset": 0, 00:12:48.033 "data_size": 65536 00:12:48.033 } 00:12:48.033 ] 00:12:48.033 } 00:12:48.033 } 00:12:48.033 }' 00:12:48.033 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:12:48.291 BaseBdev2 00:12:48.291 BaseBdev3 00:12:48.291 BaseBdev4' 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name 
in $base_bdev_names 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev4 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.291 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.550 [2024-11-27 14:12:25.579988] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:48.550 [2024-11-27 14:12:25.580146] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.550 [2024-11-27 14:12:25.580343] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.550 [2024-11-27 14:12:25.580548] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.550 [2024-11-27 14:12:25.580660] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 71297 00:12:48.550 14:12:25 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 71297 ']' 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 71297 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71297 00:12:48.550 killing process with pid 71297 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71297' 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 71297 00:12:48.550 [2024-11-27 14:12:25.618206] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:48.550 14:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 71297 00:12:48.809 [2024-11-27 14:12:25.969934] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:49.744 14:12:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:12:49.744 00:12:49.744 real 0m13.022s 00:12:49.744 user 0m21.711s 00:12:49.744 sys 0m1.814s 00:12:49.744 14:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:49.744 ************************************ 00:12:49.744 END TEST raid_state_function_test 00:12:49.744 ************************************ 00:12:49.744 14:12:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:50.003 14:12:27 bdev_raid -- bdev/bdev_raid.sh@969 -- # 
run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:12:50.003 14:12:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:12:50.003 14:12:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:50.003 14:12:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:50.003 ************************************ 00:12:50.003 START TEST raid_state_function_test_sb 00:12:50.003 ************************************ 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test concat 4 true 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=concat 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:12:50.003 Process raid pid: 71984 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' concat '!=' raid1 ']' 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # 
raid_pid=71984 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 71984' 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 71984 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 71984 ']' 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:50.003 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:50.003 14:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:50.003 [2024-11-27 14:12:27.184427] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:12:50.004 [2024-11-27 14:12:27.186025] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:50.262 [2024-11-27 14:12:27.372951] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:50.262 [2024-11-27 14:12:27.531480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:50.520 [2024-11-27 14:12:27.741389] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.520 [2024-11-27 14:12:27.741437] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.089 [2024-11-27 14:12:28.272970] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.089 [2024-11-27 14:12:28.273164] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.089 [2024-11-27 14:12:28.273293] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:51.089 [2024-11-27 14:12:28.273432] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:51.089 [2024-11-27 14:12:28.273551] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:12:51.089 [2024-11-27 14:12:28.273683] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:51.089 [2024-11-27 14:12:28.273814] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:51.089 [2024-11-27 14:12:28.273940] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.089 14:12:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.089 "name": "Existed_Raid", 00:12:51.089 "uuid": "ebdc3d05-7ca1-4e08-be53-5ab5e40ed773", 00:12:51.089 "strip_size_kb": 64, 00:12:51.089 "state": "configuring", 00:12:51.089 "raid_level": "concat", 00:12:51.089 "superblock": true, 00:12:51.089 "num_base_bdevs": 4, 00:12:51.089 "num_base_bdevs_discovered": 0, 00:12:51.089 "num_base_bdevs_operational": 4, 00:12:51.089 "base_bdevs_list": [ 00:12:51.089 { 00:12:51.089 "name": "BaseBdev1", 00:12:51.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.089 "is_configured": false, 00:12:51.089 "data_offset": 0, 00:12:51.089 "data_size": 0 00:12:51.089 }, 00:12:51.089 { 00:12:51.089 "name": "BaseBdev2", 00:12:51.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.089 "is_configured": false, 00:12:51.089 "data_offset": 0, 00:12:51.089 "data_size": 0 00:12:51.089 }, 00:12:51.089 { 00:12:51.089 "name": "BaseBdev3", 00:12:51.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.089 "is_configured": false, 00:12:51.089 "data_offset": 0, 00:12:51.089 "data_size": 0 00:12:51.089 }, 00:12:51.089 { 00:12:51.089 "name": "BaseBdev4", 00:12:51.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.089 "is_configured": false, 00:12:51.089 "data_offset": 0, 00:12:51.089 "data_size": 0 00:12:51.089 } 00:12:51.089 ] 00:12:51.089 }' 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.089 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 14:12:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:51.675 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.675 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 [2024-11-27 14:12:28.777065] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.675 [2024-11-27 14:12:28.777327] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:12:51.675 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.675 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:51.675 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.675 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.675 [2024-11-27 14:12:28.785118] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.675 [2024-11-27 14:12:28.785355] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.675 [2024-11-27 14:12:28.785493] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:51.675 [2024-11-27 14:12:28.785638] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:51.675 [2024-11-27 14:12:28.785764] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:51.675 [2024-11-27 14:12:28.785949] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:51.675 [2024-11-27 14:12:28.786056] bdev.c:8666:bdev_open_ext: *NOTICE*: 
Currently unable to find bdev with name: BaseBdev4 00:12:51.676 [2024-11-27 14:12:28.786255] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.676 [2024-11-27 14:12:28.828758] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.676 BaseBdev1 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.676 [ 00:12:51.676 { 00:12:51.676 "name": "BaseBdev1", 00:12:51.676 "aliases": [ 00:12:51.676 "2da480a6-0ffe-4e3e-9607-d52bfc895e71" 00:12:51.676 ], 00:12:51.676 "product_name": "Malloc disk", 00:12:51.676 "block_size": 512, 00:12:51.676 "num_blocks": 65536, 00:12:51.676 "uuid": "2da480a6-0ffe-4e3e-9607-d52bfc895e71", 00:12:51.676 "assigned_rate_limits": { 00:12:51.676 "rw_ios_per_sec": 0, 00:12:51.676 "rw_mbytes_per_sec": 0, 00:12:51.676 "r_mbytes_per_sec": 0, 00:12:51.676 "w_mbytes_per_sec": 0 00:12:51.676 }, 00:12:51.676 "claimed": true, 00:12:51.676 "claim_type": "exclusive_write", 00:12:51.676 "zoned": false, 00:12:51.676 "supported_io_types": { 00:12:51.676 "read": true, 00:12:51.676 "write": true, 00:12:51.676 "unmap": true, 00:12:51.676 "flush": true, 00:12:51.676 "reset": true, 00:12:51.676 "nvme_admin": false, 00:12:51.676 "nvme_io": false, 00:12:51.676 "nvme_io_md": false, 00:12:51.676 "write_zeroes": true, 00:12:51.676 "zcopy": true, 00:12:51.676 "get_zone_info": false, 00:12:51.676 "zone_management": false, 00:12:51.676 "zone_append": false, 00:12:51.676 "compare": false, 00:12:51.676 "compare_and_write": false, 00:12:51.676 "abort": true, 00:12:51.676 "seek_hole": false, 00:12:51.676 "seek_data": false, 00:12:51.676 "copy": true, 00:12:51.676 "nvme_iov_md": false 00:12:51.676 }, 00:12:51.676 "memory_domains": [ 00:12:51.676 { 00:12:51.676 "dma_device_id": "system", 00:12:51.676 "dma_device_type": 1 00:12:51.676 }, 00:12:51.676 { 00:12:51.676 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:51.676 "dma_device_type": 2 00:12:51.676 } 
00:12:51.676 ], 00:12:51.676 "driver_specific": {} 00:12:51.676 } 00:12:51.676 ] 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:51.676 14:12:28 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:51.676 "name": "Existed_Raid", 00:12:51.676 "uuid": "78b008b5-8939-4ca9-a1a9-85ee87781b96", 00:12:51.676 "strip_size_kb": 64, 00:12:51.676 "state": "configuring", 00:12:51.676 "raid_level": "concat", 00:12:51.676 "superblock": true, 00:12:51.676 "num_base_bdevs": 4, 00:12:51.676 "num_base_bdevs_discovered": 1, 00:12:51.676 "num_base_bdevs_operational": 4, 00:12:51.676 "base_bdevs_list": [ 00:12:51.676 { 00:12:51.676 "name": "BaseBdev1", 00:12:51.676 "uuid": "2da480a6-0ffe-4e3e-9607-d52bfc895e71", 00:12:51.676 "is_configured": true, 00:12:51.676 "data_offset": 2048, 00:12:51.676 "data_size": 63488 00:12:51.676 }, 00:12:51.676 { 00:12:51.676 "name": "BaseBdev2", 00:12:51.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.676 "is_configured": false, 00:12:51.676 "data_offset": 0, 00:12:51.676 "data_size": 0 00:12:51.676 }, 00:12:51.676 { 00:12:51.676 "name": "BaseBdev3", 00:12:51.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.676 "is_configured": false, 00:12:51.676 "data_offset": 0, 00:12:51.676 "data_size": 0 00:12:51.676 }, 00:12:51.676 { 00:12:51.676 "name": "BaseBdev4", 00:12:51.676 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:51.676 "is_configured": false, 00:12:51.676 "data_offset": 0, 00:12:51.676 "data_size": 0 00:12:51.676 } 00:12:51.676 ] 00:12:51.676 }' 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:51.676 14:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.243 14:12:29 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.243 [2024-11-27 14:12:29.377041] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.243 [2024-11-27 14:12:29.377307] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.243 [2024-11-27 14:12:29.385103] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:52.243 [2024-11-27 14:12:29.387768] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:52.243 [2024-11-27 14:12:29.388007] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:52.243 [2024-11-27 14:12:29.388135] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:52.243 [2024-11-27 14:12:29.388306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:52.243 [2024-11-27 14:12:29.388422] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:52.243 [2024-11-27 14:12:29.388492] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # 
(( i = 1 )) 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
raid_bdev_info='{ 00:12:52.243 "name": "Existed_Raid", 00:12:52.243 "uuid": "ce8b219f-751b-4b18-a753-cbb963260b30", 00:12:52.243 "strip_size_kb": 64, 00:12:52.243 "state": "configuring", 00:12:52.243 "raid_level": "concat", 00:12:52.243 "superblock": true, 00:12:52.243 "num_base_bdevs": 4, 00:12:52.243 "num_base_bdevs_discovered": 1, 00:12:52.243 "num_base_bdevs_operational": 4, 00:12:52.243 "base_bdevs_list": [ 00:12:52.243 { 00:12:52.243 "name": "BaseBdev1", 00:12:52.243 "uuid": "2da480a6-0ffe-4e3e-9607-d52bfc895e71", 00:12:52.243 "is_configured": true, 00:12:52.243 "data_offset": 2048, 00:12:52.243 "data_size": 63488 00:12:52.243 }, 00:12:52.243 { 00:12:52.243 "name": "BaseBdev2", 00:12:52.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.243 "is_configured": false, 00:12:52.243 "data_offset": 0, 00:12:52.243 "data_size": 0 00:12:52.243 }, 00:12:52.243 { 00:12:52.243 "name": "BaseBdev3", 00:12:52.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.243 "is_configured": false, 00:12:52.243 "data_offset": 0, 00:12:52.243 "data_size": 0 00:12:52.243 }, 00:12:52.243 { 00:12:52.243 "name": "BaseBdev4", 00:12:52.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.243 "is_configured": false, 00:12:52.243 "data_offset": 0, 00:12:52.243 "data_size": 0 00:12:52.243 } 00:12:52.243 ] 00:12:52.243 }' 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.243 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.829 [2024-11-27 14:12:29.985305] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:12:52.829 BaseBdev2 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.829 14:12:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.829 [ 00:12:52.829 { 00:12:52.829 "name": "BaseBdev2", 00:12:52.829 "aliases": [ 00:12:52.829 "5d15106c-c303-4ccd-8ead-d8e4ab2ebdc1" 00:12:52.829 ], 00:12:52.829 "product_name": "Malloc disk", 00:12:52.829 "block_size": 512, 00:12:52.829 "num_blocks": 65536, 00:12:52.829 "uuid": "5d15106c-c303-4ccd-8ead-d8e4ab2ebdc1", 
00:12:52.829 "assigned_rate_limits": { 00:12:52.829 "rw_ios_per_sec": 0, 00:12:52.829 "rw_mbytes_per_sec": 0, 00:12:52.829 "r_mbytes_per_sec": 0, 00:12:52.829 "w_mbytes_per_sec": 0 00:12:52.829 }, 00:12:52.829 "claimed": true, 00:12:52.829 "claim_type": "exclusive_write", 00:12:52.829 "zoned": false, 00:12:52.829 "supported_io_types": { 00:12:52.829 "read": true, 00:12:52.829 "write": true, 00:12:52.829 "unmap": true, 00:12:52.829 "flush": true, 00:12:52.829 "reset": true, 00:12:52.829 "nvme_admin": false, 00:12:52.829 "nvme_io": false, 00:12:52.829 "nvme_io_md": false, 00:12:52.829 "write_zeroes": true, 00:12:52.829 "zcopy": true, 00:12:52.829 "get_zone_info": false, 00:12:52.829 "zone_management": false, 00:12:52.829 "zone_append": false, 00:12:52.829 "compare": false, 00:12:52.829 "compare_and_write": false, 00:12:52.829 "abort": true, 00:12:52.829 "seek_hole": false, 00:12:52.829 "seek_data": false, 00:12:52.829 "copy": true, 00:12:52.829 "nvme_iov_md": false 00:12:52.829 }, 00:12:52.829 "memory_domains": [ 00:12:52.829 { 00:12:52.829 "dma_device_id": "system", 00:12:52.829 "dma_device_type": 1 00:12:52.829 }, 00:12:52.829 { 00:12:52.829 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.829 "dma_device_type": 2 00:12:52.829 } 00:12:52.829 ], 00:12:52.829 "driver_specific": {} 00:12:52.829 } 00:12:52.829 ] 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:52.829 "name": "Existed_Raid", 00:12:52.829 "uuid": "ce8b219f-751b-4b18-a753-cbb963260b30", 00:12:52.829 "strip_size_kb": 64, 00:12:52.829 "state": "configuring", 00:12:52.829 "raid_level": "concat", 00:12:52.829 "superblock": true, 00:12:52.829 "num_base_bdevs": 4, 00:12:52.829 "num_base_bdevs_discovered": 2, 00:12:52.829 
"num_base_bdevs_operational": 4, 00:12:52.829 "base_bdevs_list": [ 00:12:52.829 { 00:12:52.829 "name": "BaseBdev1", 00:12:52.829 "uuid": "2da480a6-0ffe-4e3e-9607-d52bfc895e71", 00:12:52.829 "is_configured": true, 00:12:52.829 "data_offset": 2048, 00:12:52.829 "data_size": 63488 00:12:52.829 }, 00:12:52.829 { 00:12:52.829 "name": "BaseBdev2", 00:12:52.829 "uuid": "5d15106c-c303-4ccd-8ead-d8e4ab2ebdc1", 00:12:52.829 "is_configured": true, 00:12:52.829 "data_offset": 2048, 00:12:52.829 "data_size": 63488 00:12:52.829 }, 00:12:52.829 { 00:12:52.829 "name": "BaseBdev3", 00:12:52.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.829 "is_configured": false, 00:12:52.829 "data_offset": 0, 00:12:52.829 "data_size": 0 00:12:52.829 }, 00:12:52.829 { 00:12:52.829 "name": "BaseBdev4", 00:12:52.829 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.829 "is_configured": false, 00:12:52.829 "data_offset": 0, 00:12:52.829 "data_size": 0 00:12:52.829 } 00:12:52.829 ] 00:12:52.829 }' 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:52.829 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.397 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:53.397 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.397 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.397 BaseBdev3 00:12:53.397 [2024-11-27 14:12:30.620102] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:53.397 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.398 [ 00:12:53.398 { 00:12:53.398 "name": "BaseBdev3", 00:12:53.398 "aliases": [ 00:12:53.398 "ee606bfe-917a-449c-ac64-c94b4695d37a" 00:12:53.398 ], 00:12:53.398 "product_name": "Malloc disk", 00:12:53.398 "block_size": 512, 00:12:53.398 "num_blocks": 65536, 00:12:53.398 "uuid": "ee606bfe-917a-449c-ac64-c94b4695d37a", 00:12:53.398 "assigned_rate_limits": { 00:12:53.398 "rw_ios_per_sec": 0, 00:12:53.398 "rw_mbytes_per_sec": 0, 00:12:53.398 "r_mbytes_per_sec": 0, 00:12:53.398 "w_mbytes_per_sec": 0 00:12:53.398 }, 00:12:53.398 "claimed": true, 00:12:53.398 "claim_type": "exclusive_write", 00:12:53.398 "zoned": false, 00:12:53.398 "supported_io_types": { 
00:12:53.398 "read": true, 00:12:53.398 "write": true, 00:12:53.398 "unmap": true, 00:12:53.398 "flush": true, 00:12:53.398 "reset": true, 00:12:53.398 "nvme_admin": false, 00:12:53.398 "nvme_io": false, 00:12:53.398 "nvme_io_md": false, 00:12:53.398 "write_zeroes": true, 00:12:53.398 "zcopy": true, 00:12:53.398 "get_zone_info": false, 00:12:53.398 "zone_management": false, 00:12:53.398 "zone_append": false, 00:12:53.398 "compare": false, 00:12:53.398 "compare_and_write": false, 00:12:53.398 "abort": true, 00:12:53.398 "seek_hole": false, 00:12:53.398 "seek_data": false, 00:12:53.398 "copy": true, 00:12:53.398 "nvme_iov_md": false 00:12:53.398 }, 00:12:53.398 "memory_domains": [ 00:12:53.398 { 00:12:53.398 "dma_device_id": "system", 00:12:53.398 "dma_device_type": 1 00:12:53.398 }, 00:12:53.398 { 00:12:53.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:53.398 "dma_device_type": 2 00:12:53.398 } 00:12:53.398 ], 00:12:53.398 "driver_specific": {} 00:12:53.398 } 00:12:53.398 ] 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.398 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.656 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:53.656 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:53.656 "name": "Existed_Raid", 00:12:53.656 "uuid": "ce8b219f-751b-4b18-a753-cbb963260b30", 00:12:53.656 "strip_size_kb": 64, 00:12:53.656 "state": "configuring", 00:12:53.656 "raid_level": "concat", 00:12:53.656 "superblock": true, 00:12:53.656 "num_base_bdevs": 4, 00:12:53.656 "num_base_bdevs_discovered": 3, 00:12:53.656 "num_base_bdevs_operational": 4, 00:12:53.656 "base_bdevs_list": [ 00:12:53.656 { 00:12:53.656 "name": "BaseBdev1", 00:12:53.656 "uuid": "2da480a6-0ffe-4e3e-9607-d52bfc895e71", 00:12:53.656 "is_configured": true, 00:12:53.656 "data_offset": 2048, 00:12:53.656 "data_size": 63488 00:12:53.656 }, 00:12:53.656 { 00:12:53.656 "name": "BaseBdev2", 00:12:53.656 
"uuid": "5d15106c-c303-4ccd-8ead-d8e4ab2ebdc1", 00:12:53.656 "is_configured": true, 00:12:53.656 "data_offset": 2048, 00:12:53.656 "data_size": 63488 00:12:53.656 }, 00:12:53.656 { 00:12:53.656 "name": "BaseBdev3", 00:12:53.656 "uuid": "ee606bfe-917a-449c-ac64-c94b4695d37a", 00:12:53.656 "is_configured": true, 00:12:53.656 "data_offset": 2048, 00:12:53.656 "data_size": 63488 00:12:53.656 }, 00:12:53.656 { 00:12:53.656 "name": "BaseBdev4", 00:12:53.656 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.656 "is_configured": false, 00:12:53.656 "data_offset": 0, 00:12:53.656 "data_size": 0 00:12:53.656 } 00:12:53.656 ] 00:12:53.657 }' 00:12:53.657 14:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:53.657 14:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:53.915 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:53.915 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:53.915 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.174 [2024-11-27 14:12:31.213485] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:54.174 [2024-11-27 14:12:31.213902] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:12:54.174 [2024-11-27 14:12:31.213924] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:12:54.174 BaseBdev4 00:12:54.174 [2024-11-27 14:12:31.214265] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:12:54.174 [2024-11-27 14:12:31.214499] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:12:54.174 [2024-11-27 14:12:31.214520] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:12:54.174 [2024-11-27 14:12:31.214727] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.174 [ 00:12:54.174 { 00:12:54.174 "name": "BaseBdev4", 00:12:54.174 "aliases": [ 00:12:54.174 "87b5d491-3417-4005-85bd-04d157b81d68" 00:12:54.174 ], 00:12:54.174 "product_name": "Malloc disk", 00:12:54.174 "block_size": 512, 00:12:54.174 
"num_blocks": 65536, 00:12:54.174 "uuid": "87b5d491-3417-4005-85bd-04d157b81d68", 00:12:54.174 "assigned_rate_limits": { 00:12:54.174 "rw_ios_per_sec": 0, 00:12:54.174 "rw_mbytes_per_sec": 0, 00:12:54.174 "r_mbytes_per_sec": 0, 00:12:54.174 "w_mbytes_per_sec": 0 00:12:54.174 }, 00:12:54.174 "claimed": true, 00:12:54.174 "claim_type": "exclusive_write", 00:12:54.174 "zoned": false, 00:12:54.174 "supported_io_types": { 00:12:54.174 "read": true, 00:12:54.174 "write": true, 00:12:54.174 "unmap": true, 00:12:54.174 "flush": true, 00:12:54.174 "reset": true, 00:12:54.174 "nvme_admin": false, 00:12:54.174 "nvme_io": false, 00:12:54.174 "nvme_io_md": false, 00:12:54.174 "write_zeroes": true, 00:12:54.174 "zcopy": true, 00:12:54.174 "get_zone_info": false, 00:12:54.174 "zone_management": false, 00:12:54.174 "zone_append": false, 00:12:54.174 "compare": false, 00:12:54.174 "compare_and_write": false, 00:12:54.174 "abort": true, 00:12:54.174 "seek_hole": false, 00:12:54.174 "seek_data": false, 00:12:54.174 "copy": true, 00:12:54.174 "nvme_iov_md": false 00:12:54.174 }, 00:12:54.174 "memory_domains": [ 00:12:54.174 { 00:12:54.174 "dma_device_id": "system", 00:12:54.174 "dma_device_type": 1 00:12:54.174 }, 00:12:54.174 { 00:12:54.174 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.174 "dma_device_type": 2 00:12:54.174 } 00:12:54.174 ], 00:12:54.174 "driver_specific": {} 00:12:54.174 } 00:12:54.174 ] 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 
00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.174 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:54.174 "name": "Existed_Raid", 00:12:54.174 "uuid": "ce8b219f-751b-4b18-a753-cbb963260b30", 00:12:54.174 "strip_size_kb": 64, 00:12:54.174 "state": "online", 00:12:54.174 "raid_level": "concat", 00:12:54.174 "superblock": true, 00:12:54.174 "num_base_bdevs": 4, 
00:12:54.174 "num_base_bdevs_discovered": 4, 00:12:54.174 "num_base_bdevs_operational": 4, 00:12:54.174 "base_bdevs_list": [ 00:12:54.174 { 00:12:54.175 "name": "BaseBdev1", 00:12:54.175 "uuid": "2da480a6-0ffe-4e3e-9607-d52bfc895e71", 00:12:54.175 "is_configured": true, 00:12:54.175 "data_offset": 2048, 00:12:54.175 "data_size": 63488 00:12:54.175 }, 00:12:54.175 { 00:12:54.175 "name": "BaseBdev2", 00:12:54.175 "uuid": "5d15106c-c303-4ccd-8ead-d8e4ab2ebdc1", 00:12:54.175 "is_configured": true, 00:12:54.175 "data_offset": 2048, 00:12:54.175 "data_size": 63488 00:12:54.175 }, 00:12:54.175 { 00:12:54.175 "name": "BaseBdev3", 00:12:54.175 "uuid": "ee606bfe-917a-449c-ac64-c94b4695d37a", 00:12:54.175 "is_configured": true, 00:12:54.175 "data_offset": 2048, 00:12:54.175 "data_size": 63488 00:12:54.175 }, 00:12:54.175 { 00:12:54.175 "name": "BaseBdev4", 00:12:54.175 "uuid": "87b5d491-3417-4005-85bd-04d157b81d68", 00:12:54.175 "is_configured": true, 00:12:54.175 "data_offset": 2048, 00:12:54.175 "data_size": 63488 00:12:54.175 } 00:12:54.175 ] 00:12:54.175 }' 00:12:54.175 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:54.175 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:12:54.742 
14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:12:54.742 [2024-11-27 14:12:31.770273] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:12:54.742 "name": "Existed_Raid", 00:12:54.742 "aliases": [ 00:12:54.742 "ce8b219f-751b-4b18-a753-cbb963260b30" 00:12:54.742 ], 00:12:54.742 "product_name": "Raid Volume", 00:12:54.742 "block_size": 512, 00:12:54.742 "num_blocks": 253952, 00:12:54.742 "uuid": "ce8b219f-751b-4b18-a753-cbb963260b30", 00:12:54.742 "assigned_rate_limits": { 00:12:54.742 "rw_ios_per_sec": 0, 00:12:54.742 "rw_mbytes_per_sec": 0, 00:12:54.742 "r_mbytes_per_sec": 0, 00:12:54.742 "w_mbytes_per_sec": 0 00:12:54.742 }, 00:12:54.742 "claimed": false, 00:12:54.742 "zoned": false, 00:12:54.742 "supported_io_types": { 00:12:54.742 "read": true, 00:12:54.742 "write": true, 00:12:54.742 "unmap": true, 00:12:54.742 "flush": true, 00:12:54.742 "reset": true, 00:12:54.742 "nvme_admin": false, 00:12:54.742 "nvme_io": false, 00:12:54.742 "nvme_io_md": false, 00:12:54.742 "write_zeroes": true, 00:12:54.742 "zcopy": false, 00:12:54.742 "get_zone_info": false, 00:12:54.742 "zone_management": false, 00:12:54.742 "zone_append": false, 00:12:54.742 "compare": false, 00:12:54.742 "compare_and_write": false, 00:12:54.742 "abort": false, 00:12:54.742 "seek_hole": false, 00:12:54.742 "seek_data": false, 00:12:54.742 "copy": false, 00:12:54.742 
"nvme_iov_md": false 00:12:54.742 }, 00:12:54.742 "memory_domains": [ 00:12:54.742 { 00:12:54.742 "dma_device_id": "system", 00:12:54.742 "dma_device_type": 1 00:12:54.742 }, 00:12:54.742 { 00:12:54.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.742 "dma_device_type": 2 00:12:54.742 }, 00:12:54.742 { 00:12:54.742 "dma_device_id": "system", 00:12:54.742 "dma_device_type": 1 00:12:54.742 }, 00:12:54.742 { 00:12:54.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.742 "dma_device_type": 2 00:12:54.742 }, 00:12:54.742 { 00:12:54.742 "dma_device_id": "system", 00:12:54.742 "dma_device_type": 1 00:12:54.742 }, 00:12:54.742 { 00:12:54.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.742 "dma_device_type": 2 00:12:54.742 }, 00:12:54.742 { 00:12:54.742 "dma_device_id": "system", 00:12:54.742 "dma_device_type": 1 00:12:54.742 }, 00:12:54.742 { 00:12:54.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.742 "dma_device_type": 2 00:12:54.742 } 00:12:54.742 ], 00:12:54.742 "driver_specific": { 00:12:54.742 "raid": { 00:12:54.742 "uuid": "ce8b219f-751b-4b18-a753-cbb963260b30", 00:12:54.742 "strip_size_kb": 64, 00:12:54.742 "state": "online", 00:12:54.742 "raid_level": "concat", 00:12:54.742 "superblock": true, 00:12:54.742 "num_base_bdevs": 4, 00:12:54.742 "num_base_bdevs_discovered": 4, 00:12:54.742 "num_base_bdevs_operational": 4, 00:12:54.742 "base_bdevs_list": [ 00:12:54.742 { 00:12:54.742 "name": "BaseBdev1", 00:12:54.742 "uuid": "2da480a6-0ffe-4e3e-9607-d52bfc895e71", 00:12:54.742 "is_configured": true, 00:12:54.742 "data_offset": 2048, 00:12:54.742 "data_size": 63488 00:12:54.742 }, 00:12:54.742 { 00:12:54.742 "name": "BaseBdev2", 00:12:54.742 "uuid": "5d15106c-c303-4ccd-8ead-d8e4ab2ebdc1", 00:12:54.742 "is_configured": true, 00:12:54.742 "data_offset": 2048, 00:12:54.742 "data_size": 63488 00:12:54.742 }, 00:12:54.742 { 00:12:54.742 "name": "BaseBdev3", 00:12:54.742 "uuid": "ee606bfe-917a-449c-ac64-c94b4695d37a", 00:12:54.742 "is_configured": true, 
00:12:54.742 "data_offset": 2048, 00:12:54.742 "data_size": 63488 00:12:54.742 }, 00:12:54.742 { 00:12:54.742 "name": "BaseBdev4", 00:12:54.742 "uuid": "87b5d491-3417-4005-85bd-04d157b81d68", 00:12:54.742 "is_configured": true, 00:12:54.742 "data_offset": 2048, 00:12:54.742 "data_size": 63488 00:12:54.742 } 00:12:54.742 ] 00:12:54.742 } 00:12:54.742 } 00:12:54.742 }' 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:12:54.742 BaseBdev2 00:12:54.742 BaseBdev3 00:12:54.742 BaseBdev4' 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:54.742 14:12:31 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.742 14:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:54.742 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.001 [2024-11-27 14:12:32.146002] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:55.001 [2024-11-27 14:12:32.146229] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:55.001 [2024-11-27 14:12:32.146414] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy concat 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # 
case $1 in 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # return 1 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@262 -- # expected_state=offline 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=offline 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:55.001 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:55.002 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:12:55.002 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:55.002 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:55.002 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:55.002 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:55.002 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.002 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.002 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.002 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.002 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:12:55.262 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:55.262 "name": "Existed_Raid", 00:12:55.262 "uuid": "ce8b219f-751b-4b18-a753-cbb963260b30", 00:12:55.262 "strip_size_kb": 64, 00:12:55.262 "state": "offline", 00:12:55.262 "raid_level": "concat", 00:12:55.262 "superblock": true, 00:12:55.262 "num_base_bdevs": 4, 00:12:55.262 "num_base_bdevs_discovered": 3, 00:12:55.262 "num_base_bdevs_operational": 3, 00:12:55.262 "base_bdevs_list": [ 00:12:55.262 { 00:12:55.262 "name": null, 00:12:55.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:55.262 "is_configured": false, 00:12:55.262 "data_offset": 0, 00:12:55.262 "data_size": 63488 00:12:55.262 }, 00:12:55.262 { 00:12:55.262 "name": "BaseBdev2", 00:12:55.262 "uuid": "5d15106c-c303-4ccd-8ead-d8e4ab2ebdc1", 00:12:55.262 "is_configured": true, 00:12:55.262 "data_offset": 2048, 00:12:55.262 "data_size": 63488 00:12:55.262 }, 00:12:55.262 { 00:12:55.262 "name": "BaseBdev3", 00:12:55.262 "uuid": "ee606bfe-917a-449c-ac64-c94b4695d37a", 00:12:55.262 "is_configured": true, 00:12:55.262 "data_offset": 2048, 00:12:55.262 "data_size": 63488 00:12:55.262 }, 00:12:55.262 { 00:12:55.262 "name": "BaseBdev4", 00:12:55.262 "uuid": "87b5d491-3417-4005-85bd-04d157b81d68", 00:12:55.262 "is_configured": true, 00:12:55.262 "data_offset": 2048, 00:12:55.262 "data_size": 63488 00:12:55.262 } 00:12:55.262 ] 00:12:55.262 }' 00:12:55.262 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:55.262 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.520 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:12:55.520 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.520 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.520 
14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.521 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.521 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.521 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.521 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.521 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.521 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:12:55.521 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.521 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.521 [2024-11-27 14:12:32.794966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:55.779 14:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:55.779 [2024-11-27 14:12:32.969041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:12:56.038 14:12:33 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.038 [2024-11-27 14:12:33.118926] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:12:56.038 [2024-11-27 14:12:33.119110] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 
32 512 -b BaseBdev2 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.038 BaseBdev2 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.038 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.298 [ 00:12:56.298 { 00:12:56.298 "name": "BaseBdev2", 00:12:56.298 "aliases": [ 00:12:56.298 
"4bcccc12-90d9-451c-a9a5-876532d167dc" 00:12:56.298 ], 00:12:56.298 "product_name": "Malloc disk", 00:12:56.298 "block_size": 512, 00:12:56.298 "num_blocks": 65536, 00:12:56.298 "uuid": "4bcccc12-90d9-451c-a9a5-876532d167dc", 00:12:56.298 "assigned_rate_limits": { 00:12:56.298 "rw_ios_per_sec": 0, 00:12:56.298 "rw_mbytes_per_sec": 0, 00:12:56.298 "r_mbytes_per_sec": 0, 00:12:56.298 "w_mbytes_per_sec": 0 00:12:56.298 }, 00:12:56.298 "claimed": false, 00:12:56.298 "zoned": false, 00:12:56.298 "supported_io_types": { 00:12:56.298 "read": true, 00:12:56.298 "write": true, 00:12:56.298 "unmap": true, 00:12:56.298 "flush": true, 00:12:56.298 "reset": true, 00:12:56.298 "nvme_admin": false, 00:12:56.298 "nvme_io": false, 00:12:56.298 "nvme_io_md": false, 00:12:56.298 "write_zeroes": true, 00:12:56.298 "zcopy": true, 00:12:56.298 "get_zone_info": false, 00:12:56.298 "zone_management": false, 00:12:56.298 "zone_append": false, 00:12:56.298 "compare": false, 00:12:56.298 "compare_and_write": false, 00:12:56.298 "abort": true, 00:12:56.298 "seek_hole": false, 00:12:56.298 "seek_data": false, 00:12:56.298 "copy": true, 00:12:56.298 "nvme_iov_md": false 00:12:56.298 }, 00:12:56.298 "memory_domains": [ 00:12:56.298 { 00:12:56.298 "dma_device_id": "system", 00:12:56.298 "dma_device_type": 1 00:12:56.298 }, 00:12:56.298 { 00:12:56.298 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.298 "dma_device_type": 2 00:12:56.298 } 00:12:56.298 ], 00:12:56.298 "driver_specific": {} 00:12:56.298 } 00:12:56.298 ] 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:56.298 14:12:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.298 BaseBdev3 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:56.298 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.299 [ 00:12:56.299 { 
00:12:56.299 "name": "BaseBdev3", 00:12:56.299 "aliases": [ 00:12:56.299 "9fca63e3-ca45-41ef-b0ac-8cfb02afff40" 00:12:56.299 ], 00:12:56.299 "product_name": "Malloc disk", 00:12:56.299 "block_size": 512, 00:12:56.299 "num_blocks": 65536, 00:12:56.299 "uuid": "9fca63e3-ca45-41ef-b0ac-8cfb02afff40", 00:12:56.299 "assigned_rate_limits": { 00:12:56.299 "rw_ios_per_sec": 0, 00:12:56.299 "rw_mbytes_per_sec": 0, 00:12:56.299 "r_mbytes_per_sec": 0, 00:12:56.299 "w_mbytes_per_sec": 0 00:12:56.299 }, 00:12:56.299 "claimed": false, 00:12:56.299 "zoned": false, 00:12:56.299 "supported_io_types": { 00:12:56.299 "read": true, 00:12:56.299 "write": true, 00:12:56.299 "unmap": true, 00:12:56.299 "flush": true, 00:12:56.299 "reset": true, 00:12:56.299 "nvme_admin": false, 00:12:56.299 "nvme_io": false, 00:12:56.299 "nvme_io_md": false, 00:12:56.299 "write_zeroes": true, 00:12:56.299 "zcopy": true, 00:12:56.299 "get_zone_info": false, 00:12:56.299 "zone_management": false, 00:12:56.299 "zone_append": false, 00:12:56.299 "compare": false, 00:12:56.299 "compare_and_write": false, 00:12:56.299 "abort": true, 00:12:56.299 "seek_hole": false, 00:12:56.299 "seek_data": false, 00:12:56.299 "copy": true, 00:12:56.299 "nvme_iov_md": false 00:12:56.299 }, 00:12:56.299 "memory_domains": [ 00:12:56.299 { 00:12:56.299 "dma_device_id": "system", 00:12:56.299 "dma_device_type": 1 00:12:56.299 }, 00:12:56.299 { 00:12:56.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.299 "dma_device_type": 2 00:12:56.299 } 00:12:56.299 ], 00:12:56.299 "driver_specific": {} 00:12:56.299 } 00:12:56.299 ] 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i 
< num_base_bdevs )) 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.299 BaseBdev4 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- 
# set +x 00:12:56.299 [ 00:12:56.299 { 00:12:56.299 "name": "BaseBdev4", 00:12:56.299 "aliases": [ 00:12:56.299 "c939cdcb-6b26-461f-a9f2-3d54bb08b8ec" 00:12:56.299 ], 00:12:56.299 "product_name": "Malloc disk", 00:12:56.299 "block_size": 512, 00:12:56.299 "num_blocks": 65536, 00:12:56.299 "uuid": "c939cdcb-6b26-461f-a9f2-3d54bb08b8ec", 00:12:56.299 "assigned_rate_limits": { 00:12:56.299 "rw_ios_per_sec": 0, 00:12:56.299 "rw_mbytes_per_sec": 0, 00:12:56.299 "r_mbytes_per_sec": 0, 00:12:56.299 "w_mbytes_per_sec": 0 00:12:56.299 }, 00:12:56.299 "claimed": false, 00:12:56.299 "zoned": false, 00:12:56.299 "supported_io_types": { 00:12:56.299 "read": true, 00:12:56.299 "write": true, 00:12:56.299 "unmap": true, 00:12:56.299 "flush": true, 00:12:56.299 "reset": true, 00:12:56.299 "nvme_admin": false, 00:12:56.299 "nvme_io": false, 00:12:56.299 "nvme_io_md": false, 00:12:56.299 "write_zeroes": true, 00:12:56.299 "zcopy": true, 00:12:56.299 "get_zone_info": false, 00:12:56.299 "zone_management": false, 00:12:56.299 "zone_append": false, 00:12:56.299 "compare": false, 00:12:56.299 "compare_and_write": false, 00:12:56.299 "abort": true, 00:12:56.299 "seek_hole": false, 00:12:56.299 "seek_data": false, 00:12:56.299 "copy": true, 00:12:56.299 "nvme_iov_md": false 00:12:56.299 }, 00:12:56.299 "memory_domains": [ 00:12:56.299 { 00:12:56.299 "dma_device_id": "system", 00:12:56.299 "dma_device_type": 1 00:12:56.299 }, 00:12:56.299 { 00:12:56.299 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:56.299 "dma_device_type": 2 00:12:56.299 } 00:12:56.299 ], 00:12:56.299 "driver_specific": {} 00:12:56.299 } 00:12:56.299 ] 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:12:56.299 14:12:33 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.299 [2024-11-27 14:12:33.482354] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:56.299 [2024-11-27 14:12:33.482572] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:56.299 [2024-11-27 14:12:33.482737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:56.299 [2024-11-27 14:12:33.485453] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:56.299 [2024-11-27 14:12:33.485650] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.299 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.299 "name": "Existed_Raid", 00:12:56.299 "uuid": "eb5ce83b-b26c-47f3-9e63-76aa8040319c", 00:12:56.299 "strip_size_kb": 64, 00:12:56.299 "state": "configuring", 00:12:56.299 "raid_level": "concat", 00:12:56.299 "superblock": true, 00:12:56.299 "num_base_bdevs": 4, 00:12:56.299 "num_base_bdevs_discovered": 3, 00:12:56.299 "num_base_bdevs_operational": 4, 00:12:56.299 "base_bdevs_list": [ 00:12:56.299 { 00:12:56.299 "name": "BaseBdev1", 00:12:56.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.299 "is_configured": false, 00:12:56.299 "data_offset": 0, 00:12:56.299 "data_size": 0 00:12:56.299 }, 00:12:56.299 { 00:12:56.299 "name": "BaseBdev2", 00:12:56.299 "uuid": "4bcccc12-90d9-451c-a9a5-876532d167dc", 00:12:56.299 "is_configured": true, 00:12:56.299 "data_offset": 2048, 00:12:56.299 "data_size": 63488 
00:12:56.299 }, 00:12:56.299 { 00:12:56.299 "name": "BaseBdev3", 00:12:56.299 "uuid": "9fca63e3-ca45-41ef-b0ac-8cfb02afff40", 00:12:56.299 "is_configured": true, 00:12:56.300 "data_offset": 2048, 00:12:56.300 "data_size": 63488 00:12:56.300 }, 00:12:56.300 { 00:12:56.300 "name": "BaseBdev4", 00:12:56.300 "uuid": "c939cdcb-6b26-461f-a9f2-3d54bb08b8ec", 00:12:56.300 "is_configured": true, 00:12:56.300 "data_offset": 2048, 00:12:56.300 "data_size": 63488 00:12:56.300 } 00:12:56.300 ] 00:12:56.300 }' 00:12:56.300 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.300 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.914 14:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:12:56.914 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.914 14:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.914 [2024-11-27 14:12:33.998540] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:56.914 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.914 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:56.914 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:56.914 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:56.914 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:56.914 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:56.914 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:56.915 "name": "Existed_Raid", 00:12:56.915 "uuid": "eb5ce83b-b26c-47f3-9e63-76aa8040319c", 00:12:56.915 "strip_size_kb": 64, 00:12:56.915 "state": "configuring", 00:12:56.915 "raid_level": "concat", 00:12:56.915 "superblock": true, 00:12:56.915 "num_base_bdevs": 4, 00:12:56.915 "num_base_bdevs_discovered": 2, 00:12:56.915 "num_base_bdevs_operational": 4, 00:12:56.915 "base_bdevs_list": [ 00:12:56.915 { 00:12:56.915 "name": "BaseBdev1", 00:12:56.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.915 "is_configured": false, 00:12:56.915 "data_offset": 0, 00:12:56.915 "data_size": 0 00:12:56.915 }, 00:12:56.915 { 00:12:56.915 "name": null, 00:12:56.915 "uuid": "4bcccc12-90d9-451c-a9a5-876532d167dc", 00:12:56.915 "is_configured": false, 00:12:56.915 "data_offset": 0, 00:12:56.915 "data_size": 63488 
00:12:56.915 }, 00:12:56.915 { 00:12:56.915 "name": "BaseBdev3", 00:12:56.915 "uuid": "9fca63e3-ca45-41ef-b0ac-8cfb02afff40", 00:12:56.915 "is_configured": true, 00:12:56.915 "data_offset": 2048, 00:12:56.915 "data_size": 63488 00:12:56.915 }, 00:12:56.915 { 00:12:56.915 "name": "BaseBdev4", 00:12:56.915 "uuid": "c939cdcb-6b26-461f-a9f2-3d54bb08b8ec", 00:12:56.915 "is_configured": true, 00:12:56.915 "data_offset": 2048, 00:12:56.915 "data_size": 63488 00:12:56.915 } 00:12:56.915 ] 00:12:56.915 }' 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:56.915 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.483 [2024-11-27 14:12:34.593164] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:57.483 BaseBdev1 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.483 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.483 [ 00:12:57.483 { 00:12:57.483 "name": "BaseBdev1", 00:12:57.483 "aliases": [ 00:12:57.483 "3eced52f-6589-40b2-aa41-11a3b27962fe" 00:12:57.483 ], 00:12:57.483 "product_name": "Malloc disk", 00:12:57.483 "block_size": 512, 00:12:57.483 "num_blocks": 65536, 00:12:57.483 "uuid": "3eced52f-6589-40b2-aa41-11a3b27962fe", 00:12:57.483 "assigned_rate_limits": { 00:12:57.483 "rw_ios_per_sec": 0, 00:12:57.483 "rw_mbytes_per_sec": 0, 
00:12:57.483 "r_mbytes_per_sec": 0, 00:12:57.483 "w_mbytes_per_sec": 0 00:12:57.483 }, 00:12:57.483 "claimed": true, 00:12:57.483 "claim_type": "exclusive_write", 00:12:57.483 "zoned": false, 00:12:57.483 "supported_io_types": { 00:12:57.483 "read": true, 00:12:57.483 "write": true, 00:12:57.483 "unmap": true, 00:12:57.483 "flush": true, 00:12:57.483 "reset": true, 00:12:57.483 "nvme_admin": false, 00:12:57.483 "nvme_io": false, 00:12:57.483 "nvme_io_md": false, 00:12:57.483 "write_zeroes": true, 00:12:57.483 "zcopy": true, 00:12:57.483 "get_zone_info": false, 00:12:57.483 "zone_management": false, 00:12:57.483 "zone_append": false, 00:12:57.483 "compare": false, 00:12:57.483 "compare_and_write": false, 00:12:57.483 "abort": true, 00:12:57.483 "seek_hole": false, 00:12:57.483 "seek_data": false, 00:12:57.483 "copy": true, 00:12:57.483 "nvme_iov_md": false 00:12:57.483 }, 00:12:57.483 "memory_domains": [ 00:12:57.483 { 00:12:57.483 "dma_device_id": "system", 00:12:57.483 "dma_device_type": 1 00:12:57.483 }, 00:12:57.483 { 00:12:57.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.483 "dma_device_type": 2 00:12:57.483 } 00:12:57.483 ], 00:12:57.483 "driver_specific": {} 00:12:57.483 } 00:12:57.483 ] 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:57.484 14:12:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:57.484 "name": "Existed_Raid", 00:12:57.484 "uuid": "eb5ce83b-b26c-47f3-9e63-76aa8040319c", 00:12:57.484 "strip_size_kb": 64, 00:12:57.484 "state": "configuring", 00:12:57.484 "raid_level": "concat", 00:12:57.484 "superblock": true, 00:12:57.484 "num_base_bdevs": 4, 00:12:57.484 "num_base_bdevs_discovered": 3, 00:12:57.484 "num_base_bdevs_operational": 4, 00:12:57.484 "base_bdevs_list": [ 00:12:57.484 { 00:12:57.484 "name": "BaseBdev1", 00:12:57.484 "uuid": "3eced52f-6589-40b2-aa41-11a3b27962fe", 00:12:57.484 "is_configured": true, 00:12:57.484 "data_offset": 2048, 00:12:57.484 "data_size": 63488 00:12:57.484 }, 00:12:57.484 { 
00:12:57.484 "name": null, 00:12:57.484 "uuid": "4bcccc12-90d9-451c-a9a5-876532d167dc", 00:12:57.484 "is_configured": false, 00:12:57.484 "data_offset": 0, 00:12:57.484 "data_size": 63488 00:12:57.484 }, 00:12:57.484 { 00:12:57.484 "name": "BaseBdev3", 00:12:57.484 "uuid": "9fca63e3-ca45-41ef-b0ac-8cfb02afff40", 00:12:57.484 "is_configured": true, 00:12:57.484 "data_offset": 2048, 00:12:57.484 "data_size": 63488 00:12:57.484 }, 00:12:57.484 { 00:12:57.484 "name": "BaseBdev4", 00:12:57.484 "uuid": "c939cdcb-6b26-461f-a9f2-3d54bb08b8ec", 00:12:57.484 "is_configured": true, 00:12:57.484 "data_offset": 2048, 00:12:57.484 "data_size": 63488 00:12:57.484 } 00:12:57.484 ] 00:12:57.484 }' 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:57.484 14:12:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.053 [2024-11-27 14:12:35.221579] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.053 14:12:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.053 "name": "Existed_Raid", 00:12:58.053 "uuid": "eb5ce83b-b26c-47f3-9e63-76aa8040319c", 00:12:58.053 "strip_size_kb": 64, 00:12:58.053 "state": "configuring", 00:12:58.053 "raid_level": "concat", 00:12:58.053 "superblock": true, 00:12:58.053 "num_base_bdevs": 4, 00:12:58.053 "num_base_bdevs_discovered": 2, 00:12:58.053 "num_base_bdevs_operational": 4, 00:12:58.053 "base_bdevs_list": [ 00:12:58.053 { 00:12:58.053 "name": "BaseBdev1", 00:12:58.053 "uuid": "3eced52f-6589-40b2-aa41-11a3b27962fe", 00:12:58.053 "is_configured": true, 00:12:58.053 "data_offset": 2048, 00:12:58.053 "data_size": 63488 00:12:58.053 }, 00:12:58.053 { 00:12:58.053 "name": null, 00:12:58.053 "uuid": "4bcccc12-90d9-451c-a9a5-876532d167dc", 00:12:58.053 "is_configured": false, 00:12:58.053 "data_offset": 0, 00:12:58.053 "data_size": 63488 00:12:58.053 }, 00:12:58.053 { 00:12:58.053 "name": null, 00:12:58.053 "uuid": "9fca63e3-ca45-41ef-b0ac-8cfb02afff40", 00:12:58.053 "is_configured": false, 00:12:58.053 "data_offset": 0, 00:12:58.053 "data_size": 63488 00:12:58.053 }, 00:12:58.053 { 00:12:58.053 "name": "BaseBdev4", 00:12:58.053 "uuid": "c939cdcb-6b26-461f-a9f2-3d54bb08b8ec", 00:12:58.053 "is_configured": true, 00:12:58.053 "data_offset": 2048, 00:12:58.053 "data_size": 63488 00:12:58.053 } 00:12:58.053 ] 00:12:58.053 }' 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.053 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.621 
14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.621 [2024-11-27 14:12:35.833818] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:58.621 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:58.621 "name": "Existed_Raid", 00:12:58.621 "uuid": "eb5ce83b-b26c-47f3-9e63-76aa8040319c", 00:12:58.621 "strip_size_kb": 64, 00:12:58.621 "state": "configuring", 00:12:58.621 "raid_level": "concat", 00:12:58.621 "superblock": true, 00:12:58.621 "num_base_bdevs": 4, 00:12:58.621 "num_base_bdevs_discovered": 3, 00:12:58.621 "num_base_bdevs_operational": 4, 00:12:58.621 "base_bdevs_list": [ 00:12:58.621 { 00:12:58.621 "name": "BaseBdev1", 00:12:58.621 "uuid": "3eced52f-6589-40b2-aa41-11a3b27962fe", 00:12:58.621 "is_configured": true, 00:12:58.621 "data_offset": 2048, 00:12:58.621 "data_size": 63488 00:12:58.621 }, 00:12:58.621 { 00:12:58.621 "name": null, 00:12:58.621 "uuid": "4bcccc12-90d9-451c-a9a5-876532d167dc", 00:12:58.621 "is_configured": false, 00:12:58.621 "data_offset": 0, 00:12:58.621 "data_size": 63488 00:12:58.621 }, 00:12:58.621 { 00:12:58.621 "name": "BaseBdev3", 00:12:58.621 "uuid": "9fca63e3-ca45-41ef-b0ac-8cfb02afff40", 00:12:58.621 "is_configured": true, 00:12:58.621 "data_offset": 2048, 00:12:58.621 "data_size": 63488 00:12:58.621 }, 00:12:58.621 { 00:12:58.621 "name": "BaseBdev4", 00:12:58.621 "uuid": 
"c939cdcb-6b26-461f-a9f2-3d54bb08b8ec", 00:12:58.622 "is_configured": true, 00:12:58.622 "data_offset": 2048, 00:12:58.622 "data_size": 63488 00:12:58.622 } 00:12:58.622 ] 00:12:58.622 }' 00:12:58.622 14:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:58.622 14:12:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.190 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:59.190 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.190 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.190 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.190 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.190 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:12:59.190 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:12:59.190 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.190 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.190 [2024-11-27 14:12:36.446099] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:12:59.450 "name": "Existed_Raid", 00:12:59.450 "uuid": "eb5ce83b-b26c-47f3-9e63-76aa8040319c", 00:12:59.450 "strip_size_kb": 64, 00:12:59.450 "state": "configuring", 00:12:59.450 "raid_level": "concat", 00:12:59.450 "superblock": true, 00:12:59.450 "num_base_bdevs": 4, 00:12:59.450 "num_base_bdevs_discovered": 2, 00:12:59.450 "num_base_bdevs_operational": 4, 00:12:59.450 "base_bdevs_list": [ 00:12:59.450 { 00:12:59.450 "name": null, 00:12:59.450 
"uuid": "3eced52f-6589-40b2-aa41-11a3b27962fe", 00:12:59.450 "is_configured": false, 00:12:59.450 "data_offset": 0, 00:12:59.450 "data_size": 63488 00:12:59.450 }, 00:12:59.450 { 00:12:59.450 "name": null, 00:12:59.450 "uuid": "4bcccc12-90d9-451c-a9a5-876532d167dc", 00:12:59.450 "is_configured": false, 00:12:59.450 "data_offset": 0, 00:12:59.450 "data_size": 63488 00:12:59.450 }, 00:12:59.450 { 00:12:59.450 "name": "BaseBdev3", 00:12:59.450 "uuid": "9fca63e3-ca45-41ef-b0ac-8cfb02afff40", 00:12:59.450 "is_configured": true, 00:12:59.450 "data_offset": 2048, 00:12:59.450 "data_size": 63488 00:12:59.450 }, 00:12:59.450 { 00:12:59.450 "name": "BaseBdev4", 00:12:59.450 "uuid": "c939cdcb-6b26-461f-a9f2-3d54bb08b8ec", 00:12:59.450 "is_configured": true, 00:12:59.450 "data_offset": 2048, 00:12:59.450 "data_size": 63488 00:12:59.450 } 00:12:59.450 ] 00:12:59.450 }' 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:12:59.450 14:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.018 [2024-11-27 14:12:37.091977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.018 14:12:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.018 "name": "Existed_Raid", 00:13:00.018 "uuid": "eb5ce83b-b26c-47f3-9e63-76aa8040319c", 00:13:00.018 "strip_size_kb": 64, 00:13:00.018 "state": "configuring", 00:13:00.018 "raid_level": "concat", 00:13:00.018 "superblock": true, 00:13:00.018 "num_base_bdevs": 4, 00:13:00.018 "num_base_bdevs_discovered": 3, 00:13:00.018 "num_base_bdevs_operational": 4, 00:13:00.018 "base_bdevs_list": [ 00:13:00.018 { 00:13:00.018 "name": null, 00:13:00.018 "uuid": "3eced52f-6589-40b2-aa41-11a3b27962fe", 00:13:00.018 "is_configured": false, 00:13:00.018 "data_offset": 0, 00:13:00.018 "data_size": 63488 00:13:00.018 }, 00:13:00.018 { 00:13:00.018 "name": "BaseBdev2", 00:13:00.018 "uuid": "4bcccc12-90d9-451c-a9a5-876532d167dc", 00:13:00.018 "is_configured": true, 00:13:00.018 "data_offset": 2048, 00:13:00.018 "data_size": 63488 00:13:00.018 }, 00:13:00.018 { 00:13:00.018 "name": "BaseBdev3", 00:13:00.018 "uuid": "9fca63e3-ca45-41ef-b0ac-8cfb02afff40", 00:13:00.018 "is_configured": true, 00:13:00.018 "data_offset": 2048, 00:13:00.018 "data_size": 63488 00:13:00.018 }, 00:13:00.018 { 00:13:00.018 "name": "BaseBdev4", 00:13:00.018 "uuid": "c939cdcb-6b26-461f-a9f2-3d54bb08b8ec", 00:13:00.018 "is_configured": true, 00:13:00.018 "data_offset": 2048, 00:13:00.018 "data_size": 63488 00:13:00.018 } 00:13:00.018 ] 00:13:00.018 }' 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.018 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.585 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.585 14:12:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:00.585 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.585 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.585 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.585 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 3eced52f-6589-40b2-aa41-11a3b27962fe 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.586 [2024-11-27 14:12:37.746166] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:00.586 [2024-11-27 14:12:37.746493] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:00.586 [2024-11-27 14:12:37.746513] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:00.586 [2024-11-27 14:12:37.746892] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:00.586 NewBaseBdev 00:13:00.586 [2024-11-27 14:12:37.747067] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:00.586 [2024-11-27 14:12:37.747088] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:00.586 [2024-11-27 14:12:37.747245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.586 14:12:37 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.586 [ 00:13:00.586 { 00:13:00.586 "name": "NewBaseBdev", 00:13:00.586 "aliases": [ 00:13:00.586 "3eced52f-6589-40b2-aa41-11a3b27962fe" 00:13:00.586 ], 00:13:00.586 "product_name": "Malloc disk", 00:13:00.586 "block_size": 512, 00:13:00.586 "num_blocks": 65536, 00:13:00.586 "uuid": "3eced52f-6589-40b2-aa41-11a3b27962fe", 00:13:00.586 "assigned_rate_limits": { 00:13:00.586 "rw_ios_per_sec": 0, 00:13:00.586 "rw_mbytes_per_sec": 0, 00:13:00.586 "r_mbytes_per_sec": 0, 00:13:00.586 "w_mbytes_per_sec": 0 00:13:00.586 }, 00:13:00.586 "claimed": true, 00:13:00.586 "claim_type": "exclusive_write", 00:13:00.586 "zoned": false, 00:13:00.586 "supported_io_types": { 00:13:00.586 "read": true, 00:13:00.586 "write": true, 00:13:00.586 "unmap": true, 00:13:00.586 "flush": true, 00:13:00.586 "reset": true, 00:13:00.586 "nvme_admin": false, 00:13:00.586 "nvme_io": false, 00:13:00.586 "nvme_io_md": false, 00:13:00.586 "write_zeroes": true, 00:13:00.586 "zcopy": true, 00:13:00.586 "get_zone_info": false, 00:13:00.586 "zone_management": false, 00:13:00.586 "zone_append": false, 00:13:00.586 "compare": false, 00:13:00.586 "compare_and_write": false, 00:13:00.586 "abort": true, 00:13:00.586 "seek_hole": false, 00:13:00.586 "seek_data": false, 00:13:00.586 "copy": true, 00:13:00.586 "nvme_iov_md": false 00:13:00.586 }, 00:13:00.586 "memory_domains": [ 00:13:00.586 { 00:13:00.586 "dma_device_id": "system", 00:13:00.586 "dma_device_type": 1 00:13:00.586 }, 00:13:00.586 { 00:13:00.586 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:00.586 "dma_device_type": 2 00:13:00.586 } 00:13:00.586 ], 00:13:00.586 "driver_specific": {} 00:13:00.586 } 00:13:00.586 ] 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:00.586 14:12:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:00.586 "name": "Existed_Raid", 00:13:00.586 "uuid": "eb5ce83b-b26c-47f3-9e63-76aa8040319c", 00:13:00.586 "strip_size_kb": 64, 00:13:00.586 
"state": "online", 00:13:00.586 "raid_level": "concat", 00:13:00.586 "superblock": true, 00:13:00.586 "num_base_bdevs": 4, 00:13:00.586 "num_base_bdevs_discovered": 4, 00:13:00.586 "num_base_bdevs_operational": 4, 00:13:00.586 "base_bdevs_list": [ 00:13:00.586 { 00:13:00.586 "name": "NewBaseBdev", 00:13:00.586 "uuid": "3eced52f-6589-40b2-aa41-11a3b27962fe", 00:13:00.586 "is_configured": true, 00:13:00.586 "data_offset": 2048, 00:13:00.586 "data_size": 63488 00:13:00.586 }, 00:13:00.586 { 00:13:00.586 "name": "BaseBdev2", 00:13:00.586 "uuid": "4bcccc12-90d9-451c-a9a5-876532d167dc", 00:13:00.586 "is_configured": true, 00:13:00.586 "data_offset": 2048, 00:13:00.586 "data_size": 63488 00:13:00.586 }, 00:13:00.586 { 00:13:00.586 "name": "BaseBdev3", 00:13:00.586 "uuid": "9fca63e3-ca45-41ef-b0ac-8cfb02afff40", 00:13:00.586 "is_configured": true, 00:13:00.586 "data_offset": 2048, 00:13:00.586 "data_size": 63488 00:13:00.586 }, 00:13:00.586 { 00:13:00.586 "name": "BaseBdev4", 00:13:00.586 "uuid": "c939cdcb-6b26-461f-a9f2-3d54bb08b8ec", 00:13:00.586 "is_configured": true, 00:13:00.586 "data_offset": 2048, 00:13:00.586 "data_size": 63488 00:13:00.586 } 00:13:00.586 ] 00:13:00.586 }' 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:00.586 14:12:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.154 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:01.154 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:01.154 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:01.154 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:01.154 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:01.154 
14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:01.154 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:01.154 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.154 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:01.154 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.154 [2024-11-27 14:12:38.322934] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:01.154 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.154 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:01.154 "name": "Existed_Raid", 00:13:01.154 "aliases": [ 00:13:01.154 "eb5ce83b-b26c-47f3-9e63-76aa8040319c" 00:13:01.154 ], 00:13:01.154 "product_name": "Raid Volume", 00:13:01.154 "block_size": 512, 00:13:01.154 "num_blocks": 253952, 00:13:01.154 "uuid": "eb5ce83b-b26c-47f3-9e63-76aa8040319c", 00:13:01.154 "assigned_rate_limits": { 00:13:01.154 "rw_ios_per_sec": 0, 00:13:01.154 "rw_mbytes_per_sec": 0, 00:13:01.154 "r_mbytes_per_sec": 0, 00:13:01.154 "w_mbytes_per_sec": 0 00:13:01.154 }, 00:13:01.154 "claimed": false, 00:13:01.154 "zoned": false, 00:13:01.154 "supported_io_types": { 00:13:01.154 "read": true, 00:13:01.154 "write": true, 00:13:01.154 "unmap": true, 00:13:01.154 "flush": true, 00:13:01.154 "reset": true, 00:13:01.154 "nvme_admin": false, 00:13:01.154 "nvme_io": false, 00:13:01.154 "nvme_io_md": false, 00:13:01.154 "write_zeroes": true, 00:13:01.154 "zcopy": false, 00:13:01.154 "get_zone_info": false, 00:13:01.154 "zone_management": false, 00:13:01.154 "zone_append": false, 00:13:01.154 "compare": false, 00:13:01.154 "compare_and_write": false, 00:13:01.154 "abort": 
false, 00:13:01.155 "seek_hole": false, 00:13:01.155 "seek_data": false, 00:13:01.155 "copy": false, 00:13:01.155 "nvme_iov_md": false 00:13:01.155 }, 00:13:01.155 "memory_domains": [ 00:13:01.155 { 00:13:01.155 "dma_device_id": "system", 00:13:01.155 "dma_device_type": 1 00:13:01.155 }, 00:13:01.155 { 00:13:01.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.155 "dma_device_type": 2 00:13:01.155 }, 00:13:01.155 { 00:13:01.155 "dma_device_id": "system", 00:13:01.155 "dma_device_type": 1 00:13:01.155 }, 00:13:01.155 { 00:13:01.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.155 "dma_device_type": 2 00:13:01.155 }, 00:13:01.155 { 00:13:01.155 "dma_device_id": "system", 00:13:01.155 "dma_device_type": 1 00:13:01.155 }, 00:13:01.155 { 00:13:01.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.155 "dma_device_type": 2 00:13:01.155 }, 00:13:01.155 { 00:13:01.155 "dma_device_id": "system", 00:13:01.155 "dma_device_type": 1 00:13:01.155 }, 00:13:01.155 { 00:13:01.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.155 "dma_device_type": 2 00:13:01.155 } 00:13:01.155 ], 00:13:01.155 "driver_specific": { 00:13:01.155 "raid": { 00:13:01.155 "uuid": "eb5ce83b-b26c-47f3-9e63-76aa8040319c", 00:13:01.155 "strip_size_kb": 64, 00:13:01.155 "state": "online", 00:13:01.155 "raid_level": "concat", 00:13:01.155 "superblock": true, 00:13:01.155 "num_base_bdevs": 4, 00:13:01.155 "num_base_bdevs_discovered": 4, 00:13:01.155 "num_base_bdevs_operational": 4, 00:13:01.155 "base_bdevs_list": [ 00:13:01.155 { 00:13:01.155 "name": "NewBaseBdev", 00:13:01.155 "uuid": "3eced52f-6589-40b2-aa41-11a3b27962fe", 00:13:01.155 "is_configured": true, 00:13:01.155 "data_offset": 2048, 00:13:01.155 "data_size": 63488 00:13:01.155 }, 00:13:01.155 { 00:13:01.155 "name": "BaseBdev2", 00:13:01.155 "uuid": "4bcccc12-90d9-451c-a9a5-876532d167dc", 00:13:01.155 "is_configured": true, 00:13:01.155 "data_offset": 2048, 00:13:01.155 "data_size": 63488 00:13:01.155 }, 00:13:01.155 { 00:13:01.155 
"name": "BaseBdev3", 00:13:01.155 "uuid": "9fca63e3-ca45-41ef-b0ac-8cfb02afff40", 00:13:01.155 "is_configured": true, 00:13:01.155 "data_offset": 2048, 00:13:01.155 "data_size": 63488 00:13:01.155 }, 00:13:01.155 { 00:13:01.155 "name": "BaseBdev4", 00:13:01.155 "uuid": "c939cdcb-6b26-461f-a9f2-3d54bb08b8ec", 00:13:01.155 "is_configured": true, 00:13:01.155 "data_offset": 2048, 00:13:01.155 "data_size": 63488 00:13:01.155 } 00:13:01.155 ] 00:13:01.155 } 00:13:01.155 } 00:13:01.155 }' 00:13:01.155 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:01.155 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:01.155 BaseBdev2 00:13:01.155 BaseBdev3 00:13:01.155 BaseBdev4' 00:13:01.155 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.415 14:12:38 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # 
[[ 512 == \5\1\2\ \ \ ]] 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.415 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:01.674 [2024-11-27 14:12:38.698558] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:01.674 [2024-11-27 14:12:38.698751] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:01.674 [2024-11-27 14:12:38.698987] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:01.674 [2024-11-27 14:12:38.699095] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:01.674 [2024-11-27 14:12:38.699128] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, 
state offline 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 71984 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 71984 ']' 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 71984 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71984 00:13:01.674 killing process with pid 71984 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71984' 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 71984 00:13:01.674 [2024-11-27 14:12:38.742681] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:01.674 14:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 71984 00:13:01.932 [2024-11-27 14:12:39.084146] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:02.868 14:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:02.868 00:13:02.868 real 0m13.036s 00:13:02.868 user 0m21.671s 00:13:02.869 sys 0m1.846s 00:13:02.869 14:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:02.869 
************************************ 00:13:02.869 END TEST raid_state_function_test_sb 00:13:02.869 ************************************ 00:13:02.869 14:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:03.129 14:12:40 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:13:03.129 14:12:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:03.129 14:12:40 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:03.129 14:12:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:03.129 ************************************ 00:13:03.129 START TEST raid_superblock_test 00:13:03.129 ************************************ 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test concat 4 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=concat 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' concat '!=' raid1 ']' 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size=64 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=72661 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 72661 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 72661 ']' 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.129 14:12:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:03.129 [2024-11-27 14:12:40.268846] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:13:03.129 [2024-11-27 14:12:40.269027] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72661 ] 00:13:03.388 [2024-11-27 14:12:40.451934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:03.388 [2024-11-27 14:12:40.581026] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.647 [2024-11-27 14:12:40.781971] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:03.647 [2024-11-27 14:12:40.782046] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:04.215 
14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.215 malloc1 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.215 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.215 [2024-11-27 14:12:41.418523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:04.215 [2024-11-27 14:12:41.418632] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.215 [2024-11-27 14:12:41.418667] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:04.215 [2024-11-27 14:12:41.418683] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.215 [2024-11-27 14:12:41.421484] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.215 [2024-11-27 14:12:41.421529] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:04.215 pt1 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.216 malloc2 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.216 [2024-11-27 14:12:41.473910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:04.216 [2024-11-27 14:12:41.473991] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.216 [2024-11-27 14:12:41.474045] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:04.216 [2024-11-27 14:12:41.474061] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.216 [2024-11-27 14:12:41.476939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.216 [2024-11-27 14:12:41.476985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:04.216 
pt2 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.216 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.475 malloc3 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.475 [2024-11-27 14:12:41.543586] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:04.475 [2024-11-27 14:12:41.543671] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.475 [2024-11-27 14:12:41.543706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:04.475 [2024-11-27 14:12:41.543723] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.475 [2024-11-27 14:12:41.546467] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.475 [2024-11-27 14:12:41.546514] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:04.475 pt3 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.475 malloc4 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.475 [2024-11-27 14:12:41.599557] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:04.475 [2024-11-27 14:12:41.599812] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:04.475 [2024-11-27 14:12:41.599857] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:04.475 [2024-11-27 14:12:41.599875] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:04.475 [2024-11-27 14:12:41.602749] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:04.475 [2024-11-27 14:12:41.602935] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:04.475 pt4 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.475 [2024-11-27 14:12:41.611764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:04.475 [2024-11-27 
14:12:41.614356] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:04.475 [2024-11-27 14:12:41.614478] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:04.475 [2024-11-27 14:12:41.614563] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:04.475 [2024-11-27 14:12:41.614852] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:04.475 [2024-11-27 14:12:41.614871] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:04.475 [2024-11-27 14:12:41.615215] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:04.475 [2024-11-27 14:12:41.615440] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:04.475 [2024-11-27 14:12:41.615462] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:04.475 [2024-11-27 14:12:41.615728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:04.475 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:04.475 "name": "raid_bdev1", 00:13:04.475 "uuid": "bcad2113-7e42-42ea-af5b-94328a7cf077", 00:13:04.475 "strip_size_kb": 64, 00:13:04.475 "state": "online", 00:13:04.475 "raid_level": "concat", 00:13:04.475 "superblock": true, 00:13:04.475 "num_base_bdevs": 4, 00:13:04.475 "num_base_bdevs_discovered": 4, 00:13:04.475 "num_base_bdevs_operational": 4, 00:13:04.475 "base_bdevs_list": [ 00:13:04.475 { 00:13:04.475 "name": "pt1", 00:13:04.475 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:04.475 "is_configured": true, 00:13:04.475 "data_offset": 2048, 00:13:04.475 "data_size": 63488 00:13:04.476 }, 00:13:04.476 { 00:13:04.476 "name": "pt2", 00:13:04.476 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:04.476 "is_configured": true, 00:13:04.476 "data_offset": 2048, 00:13:04.476 "data_size": 63488 00:13:04.476 }, 00:13:04.476 { 00:13:04.476 "name": "pt3", 00:13:04.476 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:04.476 "is_configured": true, 00:13:04.476 "data_offset": 2048, 00:13:04.476 
"data_size": 63488 00:13:04.476 }, 00:13:04.476 { 00:13:04.476 "name": "pt4", 00:13:04.476 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:04.476 "is_configured": true, 00:13:04.476 "data_offset": 2048, 00:13:04.476 "data_size": 63488 00:13:04.476 } 00:13:04.476 ] 00:13:04.476 }' 00:13:04.476 14:12:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:04.476 14:12:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.135 [2024-11-27 14:12:42.140457] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:05.135 "name": "raid_bdev1", 00:13:05.135 "aliases": [ 00:13:05.135 "bcad2113-7e42-42ea-af5b-94328a7cf077" 
00:13:05.135 ], 00:13:05.135 "product_name": "Raid Volume", 00:13:05.135 "block_size": 512, 00:13:05.135 "num_blocks": 253952, 00:13:05.135 "uuid": "bcad2113-7e42-42ea-af5b-94328a7cf077", 00:13:05.135 "assigned_rate_limits": { 00:13:05.135 "rw_ios_per_sec": 0, 00:13:05.135 "rw_mbytes_per_sec": 0, 00:13:05.135 "r_mbytes_per_sec": 0, 00:13:05.135 "w_mbytes_per_sec": 0 00:13:05.135 }, 00:13:05.135 "claimed": false, 00:13:05.135 "zoned": false, 00:13:05.135 "supported_io_types": { 00:13:05.135 "read": true, 00:13:05.135 "write": true, 00:13:05.135 "unmap": true, 00:13:05.135 "flush": true, 00:13:05.135 "reset": true, 00:13:05.135 "nvme_admin": false, 00:13:05.135 "nvme_io": false, 00:13:05.135 "nvme_io_md": false, 00:13:05.135 "write_zeroes": true, 00:13:05.135 "zcopy": false, 00:13:05.135 "get_zone_info": false, 00:13:05.135 "zone_management": false, 00:13:05.135 "zone_append": false, 00:13:05.135 "compare": false, 00:13:05.135 "compare_and_write": false, 00:13:05.135 "abort": false, 00:13:05.135 "seek_hole": false, 00:13:05.135 "seek_data": false, 00:13:05.135 "copy": false, 00:13:05.135 "nvme_iov_md": false 00:13:05.135 }, 00:13:05.135 "memory_domains": [ 00:13:05.135 { 00:13:05.135 "dma_device_id": "system", 00:13:05.135 "dma_device_type": 1 00:13:05.135 }, 00:13:05.135 { 00:13:05.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.135 "dma_device_type": 2 00:13:05.135 }, 00:13:05.135 { 00:13:05.135 "dma_device_id": "system", 00:13:05.135 "dma_device_type": 1 00:13:05.135 }, 00:13:05.135 { 00:13:05.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.135 "dma_device_type": 2 00:13:05.135 }, 00:13:05.135 { 00:13:05.135 "dma_device_id": "system", 00:13:05.135 "dma_device_type": 1 00:13:05.135 }, 00:13:05.135 { 00:13:05.135 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:05.135 "dma_device_type": 2 00:13:05.135 }, 00:13:05.135 { 00:13:05.135 "dma_device_id": "system", 00:13:05.135 "dma_device_type": 1 00:13:05.135 }, 00:13:05.135 { 00:13:05.135 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:05.135 "dma_device_type": 2 00:13:05.135 } 00:13:05.135 ], 00:13:05.135 "driver_specific": { 00:13:05.135 "raid": { 00:13:05.135 "uuid": "bcad2113-7e42-42ea-af5b-94328a7cf077", 00:13:05.135 "strip_size_kb": 64, 00:13:05.135 "state": "online", 00:13:05.135 "raid_level": "concat", 00:13:05.135 "superblock": true, 00:13:05.135 "num_base_bdevs": 4, 00:13:05.135 "num_base_bdevs_discovered": 4, 00:13:05.135 "num_base_bdevs_operational": 4, 00:13:05.135 "base_bdevs_list": [ 00:13:05.135 { 00:13:05.135 "name": "pt1", 00:13:05.135 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:05.135 "is_configured": true, 00:13:05.135 "data_offset": 2048, 00:13:05.135 "data_size": 63488 00:13:05.135 }, 00:13:05.135 { 00:13:05.135 "name": "pt2", 00:13:05.135 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:05.135 "is_configured": true, 00:13:05.135 "data_offset": 2048, 00:13:05.135 "data_size": 63488 00:13:05.135 }, 00:13:05.135 { 00:13:05.135 "name": "pt3", 00:13:05.135 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:05.135 "is_configured": true, 00:13:05.135 "data_offset": 2048, 00:13:05.135 "data_size": 63488 00:13:05.135 }, 00:13:05.135 { 00:13:05.135 "name": "pt4", 00:13:05.135 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:05.135 "is_configured": true, 00:13:05.135 "data_offset": 2048, 00:13:05.135 "data_size": 63488 00:13:05.135 } 00:13:05.135 ] 00:13:05.135 } 00:13:05.135 } 00:13:05.135 }' 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:05.135 pt2 00:13:05.135 pt3 00:13:05.135 pt4' 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.135 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.136 14:12:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.136 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.395 [2024-11-27 14:12:42.520448] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=bcad2113-7e42-42ea-af5b-94328a7cf077 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z bcad2113-7e42-42ea-af5b-94328a7cf077 ']' 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.395 [2024-11-27 14:12:42.568123] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.395 [2024-11-27 14:12:42.568159] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:05.395 [2024-11-27 14:12:42.568297] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:05.395 [2024-11-27 14:12:42.568388] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:05.395 [2024-11-27 14:12:42.568411] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.395 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.655 14:12:42 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.655 [2024-11-27 14:12:42.720214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:05.655 [2024-11-27 14:12:42.722835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:05.655 [2024-11-27 14:12:42.722904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:05.655 [2024-11-27 14:12:42.722959] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:05.655 [2024-11-27 14:12:42.723030] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:05.655 [2024-11-27 14:12:42.723156] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:05.655 [2024-11-27 14:12:42.723189] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:05.655 [2024-11-27 14:12:42.723235] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:05.655 [2024-11-27 14:12:42.723257] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:05.655 [2024-11-27 14:12:42.723273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007b00 name raid_bdev1, state configuring 00:13:05.655 request: 00:13:05.655 { 00:13:05.655 "name": "raid_bdev1", 00:13:05.655 "raid_level": "concat", 00:13:05.655 "base_bdevs": [ 00:13:05.655 "malloc1", 00:13:05.655 "malloc2", 00:13:05.655 "malloc3", 00:13:05.655 "malloc4" 00:13:05.655 ], 00:13:05.655 "strip_size_kb": 64, 00:13:05.655 "superblock": false, 00:13:05.655 "method": "bdev_raid_create", 00:13:05.655 "req_id": 1 00:13:05.655 } 00:13:05.655 Got JSON-RPC error response 00:13:05.655 response: 00:13:05.655 { 00:13:05.655 "code": -17, 00:13:05.655 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:05.655 } 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 
-u 00000000-0000-0000-0000-000000000001 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.655 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.655 [2024-11-27 14:12:42.788277] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:05.655 [2024-11-27 14:12:42.788364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:05.655 [2024-11-27 14:12:42.788394] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:05.655 [2024-11-27 14:12:42.788411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:05.655 [2024-11-27 14:12:42.791370] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:05.656 [2024-11-27 14:12:42.791616] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:05.656 [2024-11-27 14:12:42.791735] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:05.656 [2024-11-27 14:12:42.791837] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:05.656 pt1 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:05.656 "name": "raid_bdev1", 00:13:05.656 "uuid": "bcad2113-7e42-42ea-af5b-94328a7cf077", 00:13:05.656 "strip_size_kb": 64, 00:13:05.656 "state": "configuring", 00:13:05.656 "raid_level": "concat", 00:13:05.656 "superblock": true, 00:13:05.656 "num_base_bdevs": 4, 00:13:05.656 "num_base_bdevs_discovered": 1, 00:13:05.656 "num_base_bdevs_operational": 4, 00:13:05.656 "base_bdevs_list": [ 00:13:05.656 { 00:13:05.656 "name": "pt1", 00:13:05.656 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:05.656 "is_configured": true, 00:13:05.656 "data_offset": 2048, 00:13:05.656 "data_size": 63488 00:13:05.656 }, 00:13:05.656 { 00:13:05.656 "name": null, 00:13:05.656 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:05.656 "is_configured": false, 00:13:05.656 "data_offset": 2048, 00:13:05.656 "data_size": 63488 00:13:05.656 }, 00:13:05.656 { 00:13:05.656 "name": null, 00:13:05.656 
"uuid": "00000000-0000-0000-0000-000000000003", 00:13:05.656 "is_configured": false, 00:13:05.656 "data_offset": 2048, 00:13:05.656 "data_size": 63488 00:13:05.656 }, 00:13:05.656 { 00:13:05.656 "name": null, 00:13:05.656 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:05.656 "is_configured": false, 00:13:05.656 "data_offset": 2048, 00:13:05.656 "data_size": 63488 00:13:05.656 } 00:13:05.656 ] 00:13:05.656 }' 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:05.656 14:12:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.224 [2024-11-27 14:12:43.280476] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:06.224 [2024-11-27 14:12:43.280580] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.224 [2024-11-27 14:12:43.280610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:06.224 [2024-11-27 14:12:43.280628] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.224 [2024-11-27 14:12:43.281210] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.224 [2024-11-27 14:12:43.281260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:06.224 [2024-11-27 14:12:43.281380] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:06.224 [2024-11-27 14:12:43.281418] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:06.224 pt2 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.224 [2024-11-27 14:12:43.288456] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.224 14:12:43 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.224 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.224 "name": "raid_bdev1", 00:13:06.224 "uuid": "bcad2113-7e42-42ea-af5b-94328a7cf077", 00:13:06.224 "strip_size_kb": 64, 00:13:06.224 "state": "configuring", 00:13:06.224 "raid_level": "concat", 00:13:06.224 "superblock": true, 00:13:06.224 "num_base_bdevs": 4, 00:13:06.224 "num_base_bdevs_discovered": 1, 00:13:06.224 "num_base_bdevs_operational": 4, 00:13:06.224 "base_bdevs_list": [ 00:13:06.224 { 00:13:06.224 "name": "pt1", 00:13:06.224 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.224 "is_configured": true, 00:13:06.224 "data_offset": 2048, 00:13:06.224 "data_size": 63488 00:13:06.224 }, 00:13:06.224 { 00:13:06.224 "name": null, 00:13:06.224 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.224 "is_configured": false, 00:13:06.224 "data_offset": 0, 00:13:06.224 "data_size": 63488 00:13:06.224 }, 00:13:06.224 { 00:13:06.224 "name": null, 00:13:06.224 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.224 "is_configured": false, 00:13:06.224 "data_offset": 2048, 00:13:06.224 "data_size": 63488 00:13:06.224 }, 00:13:06.224 { 00:13:06.224 "name": null, 00:13:06.224 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:06.224 "is_configured": false, 00:13:06.225 "data_offset": 2048, 00:13:06.225 "data_size": 63488 00:13:06.225 } 00:13:06.225 ] 00:13:06.225 }' 00:13:06.225 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.225 14:12:43 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:06.810 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:13:06.810 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:06.810 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:06.810 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.810 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.810 [2024-11-27 14:12:43.844705] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:06.810 [2024-11-27 14:12:43.844813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.810 [2024-11-27 14:12:43.844846] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:06.810 [2024-11-27 14:12:43.844862] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.810 [2024-11-27 14:12:43.845412] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.810 [2024-11-27 14:12:43.845436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:06.810 [2024-11-27 14:12:43.845536] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:06.810 [2024-11-27 14:12:43.845566] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:06.810 pt2 00:13:06.810 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.810 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:06.810 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:06.811 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd 
bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:06.811 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.811 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.811 [2024-11-27 14:12:43.852718] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:06.811 [2024-11-27 14:12:43.852986] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.811 [2024-11-27 14:12:43.853062] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:06.811 [2024-11-27 14:12:43.853315] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.811 [2024-11-27 14:12:43.853861] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.811 [2024-11-27 14:12:43.854025] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:06.811 [2024-11-27 14:12:43.854247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:06.811 [2024-11-27 14:12:43.854408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:06.811 pt3 00:13:06.811 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.811 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:06.811 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:06.811 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:06.811 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.811 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.811 [2024-11-27 14:12:43.860654] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:06.811 [2024-11-27 14:12:43.860705] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:06.811 [2024-11-27 14:12:43.860747] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:06.811 [2024-11-27 14:12:43.860760] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:06.811 [2024-11-27 14:12:43.861266] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:06.811 [2024-11-27 14:12:43.861302] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:06.811 [2024-11-27 14:12:43.861387] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:06.811 [2024-11-27 14:12:43.861421] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:06.812 [2024-11-27 14:12:43.861589] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:06.812 [2024-11-27 14:12:43.861604] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:06.812 [2024-11-27 14:12:43.861933] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:06.812 [2024-11-27 14:12:43.862140] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:06.812 [2024-11-27 14:12:43.862162] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:06.812 [2024-11-27 14:12:43.862332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:06.812 pt4 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- 
# (( i < num_base_bdevs )) 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.812 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.813 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.813 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:06.813 "name": "raid_bdev1", 00:13:06.813 "uuid": "bcad2113-7e42-42ea-af5b-94328a7cf077", 00:13:06.813 "strip_size_kb": 64, 00:13:06.813 "state": "online", 00:13:06.813 "raid_level": "concat", 00:13:06.813 
"superblock": true, 00:13:06.813 "num_base_bdevs": 4, 00:13:06.813 "num_base_bdevs_discovered": 4, 00:13:06.813 "num_base_bdevs_operational": 4, 00:13:06.813 "base_bdevs_list": [ 00:13:06.813 { 00:13:06.813 "name": "pt1", 00:13:06.813 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:06.813 "is_configured": true, 00:13:06.813 "data_offset": 2048, 00:13:06.813 "data_size": 63488 00:13:06.813 }, 00:13:06.813 { 00:13:06.813 "name": "pt2", 00:13:06.813 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:06.813 "is_configured": true, 00:13:06.813 "data_offset": 2048, 00:13:06.813 "data_size": 63488 00:13:06.813 }, 00:13:06.813 { 00:13:06.813 "name": "pt3", 00:13:06.813 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:06.815 "is_configured": true, 00:13:06.815 "data_offset": 2048, 00:13:06.815 "data_size": 63488 00:13:06.815 }, 00:13:06.815 { 00:13:06.815 "name": "pt4", 00:13:06.815 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:06.815 "is_configured": true, 00:13:06.815 "data_offset": 2048, 00:13:06.815 "data_size": 63488 00:13:06.815 } 00:13:06.815 ] 00:13:06.815 }' 00:13:06.815 14:12:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:06.815 14:12:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.395 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:07.395 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:07.395 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:07.395 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:07.395 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:07.395 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:07.395 14:12:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:07.395 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:07.395 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.395 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.396 [2024-11-27 14:12:44.385359] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:07.396 "name": "raid_bdev1", 00:13:07.396 "aliases": [ 00:13:07.396 "bcad2113-7e42-42ea-af5b-94328a7cf077" 00:13:07.396 ], 00:13:07.396 "product_name": "Raid Volume", 00:13:07.396 "block_size": 512, 00:13:07.396 "num_blocks": 253952, 00:13:07.396 "uuid": "bcad2113-7e42-42ea-af5b-94328a7cf077", 00:13:07.396 "assigned_rate_limits": { 00:13:07.396 "rw_ios_per_sec": 0, 00:13:07.396 "rw_mbytes_per_sec": 0, 00:13:07.396 "r_mbytes_per_sec": 0, 00:13:07.396 "w_mbytes_per_sec": 0 00:13:07.396 }, 00:13:07.396 "claimed": false, 00:13:07.396 "zoned": false, 00:13:07.396 "supported_io_types": { 00:13:07.396 "read": true, 00:13:07.396 "write": true, 00:13:07.396 "unmap": true, 00:13:07.396 "flush": true, 00:13:07.396 "reset": true, 00:13:07.396 "nvme_admin": false, 00:13:07.396 "nvme_io": false, 00:13:07.396 "nvme_io_md": false, 00:13:07.396 "write_zeroes": true, 00:13:07.396 "zcopy": false, 00:13:07.396 "get_zone_info": false, 00:13:07.396 "zone_management": false, 00:13:07.396 "zone_append": false, 00:13:07.396 "compare": false, 00:13:07.396 "compare_and_write": false, 00:13:07.396 "abort": false, 00:13:07.396 "seek_hole": false, 00:13:07.396 "seek_data": false, 00:13:07.396 "copy": false, 00:13:07.396 "nvme_iov_md": false 00:13:07.396 }, 00:13:07.396 
"memory_domains": [ 00:13:07.396 { 00:13:07.396 "dma_device_id": "system", 00:13:07.396 "dma_device_type": 1 00:13:07.396 }, 00:13:07.396 { 00:13:07.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.396 "dma_device_type": 2 00:13:07.396 }, 00:13:07.396 { 00:13:07.396 "dma_device_id": "system", 00:13:07.396 "dma_device_type": 1 00:13:07.396 }, 00:13:07.396 { 00:13:07.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.396 "dma_device_type": 2 00:13:07.396 }, 00:13:07.396 { 00:13:07.396 "dma_device_id": "system", 00:13:07.396 "dma_device_type": 1 00:13:07.396 }, 00:13:07.396 { 00:13:07.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.396 "dma_device_type": 2 00:13:07.396 }, 00:13:07.396 { 00:13:07.396 "dma_device_id": "system", 00:13:07.396 "dma_device_type": 1 00:13:07.396 }, 00:13:07.396 { 00:13:07.396 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:07.396 "dma_device_type": 2 00:13:07.396 } 00:13:07.396 ], 00:13:07.396 "driver_specific": { 00:13:07.396 "raid": { 00:13:07.396 "uuid": "bcad2113-7e42-42ea-af5b-94328a7cf077", 00:13:07.396 "strip_size_kb": 64, 00:13:07.396 "state": "online", 00:13:07.396 "raid_level": "concat", 00:13:07.396 "superblock": true, 00:13:07.396 "num_base_bdevs": 4, 00:13:07.396 "num_base_bdevs_discovered": 4, 00:13:07.396 "num_base_bdevs_operational": 4, 00:13:07.396 "base_bdevs_list": [ 00:13:07.396 { 00:13:07.396 "name": "pt1", 00:13:07.396 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:07.396 "is_configured": true, 00:13:07.396 "data_offset": 2048, 00:13:07.396 "data_size": 63488 00:13:07.396 }, 00:13:07.396 { 00:13:07.396 "name": "pt2", 00:13:07.396 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:07.396 "is_configured": true, 00:13:07.396 "data_offset": 2048, 00:13:07.396 "data_size": 63488 00:13:07.396 }, 00:13:07.396 { 00:13:07.396 "name": "pt3", 00:13:07.396 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:07.396 "is_configured": true, 00:13:07.396 "data_offset": 2048, 00:13:07.396 "data_size": 63488 
00:13:07.396 }, 00:13:07.396 { 00:13:07.396 "name": "pt4", 00:13:07.396 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:07.396 "is_configured": true, 00:13:07.396 "data_offset": 2048, 00:13:07.396 "data_size": 63488 00:13:07.396 } 00:13:07.396 ] 00:13:07.396 } 00:13:07.396 } 00:13:07.396 }' 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:07.396 pt2 00:13:07.396 pt3 00:13:07.396 pt4' 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b pt2 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.396 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.655 [2024-11-27 14:12:44.769392] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' bcad2113-7e42-42ea-af5b-94328a7cf077 '!=' bcad2113-7e42-42ea-af5b-94328a7cf077 ']' 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy concat 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 72661 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 72661 ']' 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 72661 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@959 -- # uname 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72661 00:13:07.655 killing process with pid 72661 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72661' 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 72661 00:13:07.655 [2024-11-27 14:12:44.851722] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:07.655 14:12:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 72661 00:13:07.655 [2024-11-27 14:12:44.851840] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:07.655 [2024-11-27 14:12:44.851955] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:07.655 [2024-11-27 14:12:44.851972] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:08.223 [2024-11-27 14:12:45.208215] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:09.161 14:12:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:09.161 00:13:09.161 real 0m6.078s 00:13:09.161 user 0m9.238s 00:13:09.161 sys 0m0.857s 00:13:09.161 ************************************ 00:13:09.161 END TEST raid_superblock_test 00:13:09.161 ************************************ 00:13:09.161 14:12:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.161 14:12:46 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.161 14:12:46 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test concat 4 read 00:13:09.161 14:12:46 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:09.161 14:12:46 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.161 14:12:46 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:09.161 ************************************ 00:13:09.161 START TEST raid_read_error_test 00:13:09.161 ************************************ 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 read 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.d0G8IEwUy8 00:13:09.161 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=72932 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 72932 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 72932 ']' 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.161 14:12:46 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.161 [2024-11-27 14:12:46.416893] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:13:09.161 [2024-11-27 14:12:46.417092] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72932 ] 00:13:09.420 [2024-11-27 14:12:46.599864] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:09.679 [2024-11-27 14:12:46.724180] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.679 [2024-11-27 14:12:46.917696] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:09.679 [2024-11-27 14:12:46.917794] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.357 BaseBdev1_malloc 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.357 true 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.357 [2024-11-27 14:12:47.429872] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:10.357 [2024-11-27 14:12:47.429957] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.357 [2024-11-27 14:12:47.429987] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:10.357 [2024-11-27 14:12:47.430004] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.357 [2024-11-27 14:12:47.432968] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.357 [2024-11-27 14:12:47.433019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:10.357 BaseBdev1 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.357 BaseBdev2_malloc 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.357 true 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.357 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.357 [2024-11-27 14:12:47.485043] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:10.358 [2024-11-27 14:12:47.485133] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.358 [2024-11-27 14:12:47.485174] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:10.358 [2024-11-27 14:12:47.485191] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.358 [2024-11-27 14:12:47.488044] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.358 [2024-11-27 14:12:47.488093] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:10.358 BaseBdev2 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.358 BaseBdev3_malloc 00:13:10.358 14:12:47 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.358 true 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.358 [2024-11-27 14:12:47.561190] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:10.358 [2024-11-27 14:12:47.561272] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.358 [2024-11-27 14:12:47.561298] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:10.358 [2024-11-27 14:12:47.561314] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.358 [2024-11-27 14:12:47.564124] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.358 [2024-11-27 14:12:47.564200] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:10.358 BaseBdev3 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.358 BaseBdev4_malloc 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.358 true 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.358 [2024-11-27 14:12:47.619567] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:10.358 [2024-11-27 14:12:47.619840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:10.358 [2024-11-27 14:12:47.619879] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:10.358 [2024-11-27 14:12:47.619898] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:10.358 [2024-11-27 14:12:47.622697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:10.358 [2024-11-27 14:12:47.622751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:10.358 BaseBdev4 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.358 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.358 [2024-11-27 14:12:47.631718] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:10.618 [2024-11-27 14:12:47.634349] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.618 [2024-11-27 14:12:47.634477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:10.618 [2024-11-27 14:12:47.634613] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:10.618 [2024-11-27 14:12:47.635009] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:10.618 [2024-11-27 14:12:47.635046] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:10.618 [2024-11-27 14:12:47.635344] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:10.618 [2024-11-27 14:12:47.635539] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:10.618 [2024-11-27 14:12:47.635556] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:10.618 [2024-11-27 14:12:47.635811] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:10.618 14:12:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:10.618 "name": "raid_bdev1", 00:13:10.618 "uuid": "26d977ce-7f4b-4d7a-80cd-ab78c62c2863", 00:13:10.618 "strip_size_kb": 64, 00:13:10.618 "state": "online", 00:13:10.618 "raid_level": "concat", 00:13:10.618 "superblock": true, 00:13:10.618 "num_base_bdevs": 4, 00:13:10.618 "num_base_bdevs_discovered": 4, 00:13:10.618 "num_base_bdevs_operational": 4, 00:13:10.618 "base_bdevs_list": [ 
00:13:10.618 { 00:13:10.618 "name": "BaseBdev1", 00:13:10.618 "uuid": "bd1576c0-20f8-51ed-9a16-49fef3d84dfa", 00:13:10.618 "is_configured": true, 00:13:10.618 "data_offset": 2048, 00:13:10.618 "data_size": 63488 00:13:10.618 }, 00:13:10.618 { 00:13:10.618 "name": "BaseBdev2", 00:13:10.618 "uuid": "37531952-ddfb-52ba-9503-37a98c1e1fcf", 00:13:10.618 "is_configured": true, 00:13:10.618 "data_offset": 2048, 00:13:10.618 "data_size": 63488 00:13:10.618 }, 00:13:10.618 { 00:13:10.618 "name": "BaseBdev3", 00:13:10.618 "uuid": "b53c7069-a213-5c96-85e8-46a4501e5c67", 00:13:10.618 "is_configured": true, 00:13:10.618 "data_offset": 2048, 00:13:10.618 "data_size": 63488 00:13:10.618 }, 00:13:10.618 { 00:13:10.618 "name": "BaseBdev4", 00:13:10.618 "uuid": "6ec9426d-e774-58dd-b7a9-40568d377580", 00:13:10.618 "is_configured": true, 00:13:10.618 "data_offset": 2048, 00:13:10.618 "data_size": 63488 00:13:10.618 } 00:13:10.618 ] 00:13:10.618 }' 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:10.618 14:12:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.877 14:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:10.877 14:12:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:11.135 [2024-11-27 14:12:48.249429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.072 14:12:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.072 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.072 14:12:49 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:12.072 "name": "raid_bdev1", 00:13:12.072 "uuid": "26d977ce-7f4b-4d7a-80cd-ab78c62c2863", 00:13:12.072 "strip_size_kb": 64, 00:13:12.072 "state": "online", 00:13:12.072 "raid_level": "concat", 00:13:12.072 "superblock": true, 00:13:12.072 "num_base_bdevs": 4, 00:13:12.072 "num_base_bdevs_discovered": 4, 00:13:12.072 "num_base_bdevs_operational": 4, 00:13:12.072 "base_bdevs_list": [ 00:13:12.072 { 00:13:12.072 "name": "BaseBdev1", 00:13:12.072 "uuid": "bd1576c0-20f8-51ed-9a16-49fef3d84dfa", 00:13:12.072 "is_configured": true, 00:13:12.072 "data_offset": 2048, 00:13:12.072 "data_size": 63488 00:13:12.072 }, 00:13:12.072 { 00:13:12.072 "name": "BaseBdev2", 00:13:12.072 "uuid": "37531952-ddfb-52ba-9503-37a98c1e1fcf", 00:13:12.072 "is_configured": true, 00:13:12.073 "data_offset": 2048, 00:13:12.073 "data_size": 63488 00:13:12.073 }, 00:13:12.073 { 00:13:12.073 "name": "BaseBdev3", 00:13:12.073 "uuid": "b53c7069-a213-5c96-85e8-46a4501e5c67", 00:13:12.073 "is_configured": true, 00:13:12.073 "data_offset": 2048, 00:13:12.073 "data_size": 63488 00:13:12.073 }, 00:13:12.073 { 00:13:12.073 "name": "BaseBdev4", 00:13:12.073 "uuid": "6ec9426d-e774-58dd-b7a9-40568d377580", 00:13:12.073 "is_configured": true, 00:13:12.073 "data_offset": 2048, 00:13:12.073 "data_size": 63488 00:13:12.073 } 00:13:12.073 ] 00:13:12.073 }' 00:13:12.073 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:12.073 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.640 [2024-11-27 14:12:49.660891] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:12.640 [2024-11-27 14:12:49.660933] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:12.640 [2024-11-27 14:12:49.664456] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:12.640 [2024-11-27 14:12:49.664551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:12.640 [2024-11-27 14:12:49.664612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:12.640 [2024-11-27 14:12:49.664630] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:12.640 { 00:13:12.640 "results": [ 00:13:12.640 { 00:13:12.640 "job": "raid_bdev1", 00:13:12.640 "core_mask": "0x1", 00:13:12.640 "workload": "randrw", 00:13:12.640 "percentage": 50, 00:13:12.640 "status": "finished", 00:13:12.640 "queue_depth": 1, 00:13:12.640 "io_size": 131072, 00:13:12.640 "runtime": 1.408954, 00:13:12.640 "iops": 10387.848006393395, 00:13:12.640 "mibps": 1298.4810007991744, 00:13:12.640 "io_failed": 1, 00:13:12.640 "io_timeout": 0, 00:13:12.640 "avg_latency_us": 133.96221133242653, 00:13:12.640 "min_latency_us": 37.00363636363636, 00:13:12.640 "max_latency_us": 1951.1854545454546 00:13:12.640 } 00:13:12.640 ], 00:13:12.640 "core_count": 1 00:13:12.640 } 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 72932 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 72932 ']' 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 72932 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # uname 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test 
-- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72932 00:13:12.640 killing process with pid 72932 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72932' 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 72932 00:13:12.640 [2024-11-27 14:12:49.697367] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:12.640 14:12:49 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 72932 00:13:12.899 [2024-11-27 14:12:49.975183] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:13.837 14:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:13.837 14:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.d0G8IEwUy8 00:13:13.837 14:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:13.837 ************************************ 00:13:13.837 END TEST raid_read_error_test 00:13:13.837 ************************************ 00:13:13.837 14:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.71 00:13:13.837 14:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:13.837 14:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:13.837 14:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:13.837 14:12:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.71 != \0\.\0\0 ]] 00:13:13.837 00:13:13.837 real 0m4.758s 
00:13:13.837 user 0m5.857s 00:13:13.837 sys 0m0.587s 00:13:13.837 14:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.837 14:12:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:13.837 14:12:51 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:13:13.837 14:12:51 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:13.837 14:12:51 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.837 14:12:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:13.837 ************************************ 00:13:13.837 START TEST raid_write_error_test 00:13:13.837 ************************************ 00:13:13.837 14:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test concat 4 write 00:13:13.837 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=concat 00:13:13.837 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:13.837 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:13.837 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:13.837 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:13.837 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:13.837 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:13.838 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:13.838 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:13.838 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:13.838 14:12:51 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:13.838 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:13.838 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:13.838 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:13.838 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:13.838 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:14.096 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' concat '!=' raid1 ']' 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@801 -- # strip_size=64 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # create_arg+=' -z 64' 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.JZnhhEcBbC 00:13:14.097 14:12:51 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=73078 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 73078 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 73078 ']' 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:14.097 14:12:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.097 [2024-11-27 14:12:51.233078] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:13:14.097 [2024-11-27 14:12:51.233276] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73078 ] 00:13:14.355 [2024-11-27 14:12:51.419142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.355 [2024-11-27 14:12:51.548037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.689 [2024-11-27 14:12:51.736811] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.689 [2024-11-27 14:12:51.736887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.947 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.206 BaseBdev1_malloc 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.206 true 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.206 [2024-11-27 14:12:52.283648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:15.206 [2024-11-27 14:12:52.283738] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.206 [2024-11-27 14:12:52.283804] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:15.206 [2024-11-27 14:12:52.283848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.206 [2024-11-27 14:12:52.286864] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.206 [2024-11-27 14:12:52.286912] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:15.206 BaseBdev1 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.206 BaseBdev2_malloc 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:15.206 14:12:52 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.206 true 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.206 [2024-11-27 14:12:52.353996] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:15.206 [2024-11-27 14:12:52.354060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.206 [2024-11-27 14:12:52.354085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:15.206 [2024-11-27 14:12:52.354102] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.206 [2024-11-27 14:12:52.356962] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.206 [2024-11-27 14:12:52.357006] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:15.206 BaseBdev2 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:13:15.206 BaseBdev3_malloc 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.206 true 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.206 [2024-11-27 14:12:52.426874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:15.206 [2024-11-27 14:12:52.426966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.206 [2024-11-27 14:12:52.426993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:15.206 [2024-11-27 14:12:52.427010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.206 [2024-11-27 14:12:52.429776] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.206 [2024-11-27 14:12:52.429840] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:15.206 BaseBdev3 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.206 BaseBdev4_malloc 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.206 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.465 true 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.465 [2024-11-27 14:12:52.488675] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:15.465 [2024-11-27 14:12:52.488739] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:15.465 [2024-11-27 14:12:52.488766] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:15.465 [2024-11-27 14:12:52.488802] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:15.465 [2024-11-27 14:12:52.491626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:15.465 [2024-11-27 14:12:52.491690] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:15.465 BaseBdev4 
00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -z 64 -r concat -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.465 [2024-11-27 14:12:52.496760] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:15.465 [2024-11-27 14:12:52.499284] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:15.465 [2024-11-27 14:12:52.499397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:15.465 [2024-11-27 14:12:52.499497] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:15.465 [2024-11-27 14:12:52.499828] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:15.465 [2024-11-27 14:12:52.499861] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:15.465 [2024-11-27 14:12:52.500187] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:15.465 [2024-11-27 14:12:52.500415] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:15.465 [2024-11-27 14:12:52.500444] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:15.465 [2024-11-27 14:12:52.500691] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state 
raid_bdev1 online concat 64 4 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:15.465 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:15.466 "name": "raid_bdev1", 00:13:15.466 "uuid": "c26286e0-d164-435f-a33d-464bbc0c5a6d", 00:13:15.466 "strip_size_kb": 64, 00:13:15.466 "state": "online", 00:13:15.466 "raid_level": "concat", 00:13:15.466 "superblock": true, 00:13:15.466 "num_base_bdevs": 4, 00:13:15.466 "num_base_bdevs_discovered": 4, 00:13:15.466 
"num_base_bdevs_operational": 4, 00:13:15.466 "base_bdevs_list": [ 00:13:15.466 { 00:13:15.466 "name": "BaseBdev1", 00:13:15.466 "uuid": "2bf5e4cd-bde4-5145-a9a9-75ebe08eff29", 00:13:15.466 "is_configured": true, 00:13:15.466 "data_offset": 2048, 00:13:15.466 "data_size": 63488 00:13:15.466 }, 00:13:15.466 { 00:13:15.466 "name": "BaseBdev2", 00:13:15.466 "uuid": "2f81888d-0652-56b5-bab6-08ef4872f824", 00:13:15.466 "is_configured": true, 00:13:15.466 "data_offset": 2048, 00:13:15.466 "data_size": 63488 00:13:15.466 }, 00:13:15.466 { 00:13:15.466 "name": "BaseBdev3", 00:13:15.466 "uuid": "28136421-cba8-53f0-a17b-674974c1a0a4", 00:13:15.466 "is_configured": true, 00:13:15.466 "data_offset": 2048, 00:13:15.466 "data_size": 63488 00:13:15.466 }, 00:13:15.466 { 00:13:15.466 "name": "BaseBdev4", 00:13:15.466 "uuid": "26df35f1-38a2-5044-8177-d34b3c50d7c0", 00:13:15.466 "is_configured": true, 00:13:15.466 "data_offset": 2048, 00:13:15.466 "data_size": 63488 00:13:15.466 } 00:13:15.466 ] 00:13:15.466 }' 00:13:15.466 14:12:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:15.466 14:12:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.033 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:16.033 14:12:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:16.033 [2024-11-27 14:12:53.146310] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ concat = \r\a\i\d\1 ]] 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=concat 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:16.969 14:12:54 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:16.969 "name": "raid_bdev1", 00:13:16.969 "uuid": "c26286e0-d164-435f-a33d-464bbc0c5a6d", 00:13:16.969 "strip_size_kb": 64, 00:13:16.969 "state": "online", 00:13:16.969 "raid_level": "concat", 00:13:16.969 "superblock": true, 00:13:16.969 "num_base_bdevs": 4, 00:13:16.969 "num_base_bdevs_discovered": 4, 00:13:16.969 "num_base_bdevs_operational": 4, 00:13:16.969 "base_bdevs_list": [ 00:13:16.969 { 00:13:16.969 "name": "BaseBdev1", 00:13:16.969 "uuid": "2bf5e4cd-bde4-5145-a9a9-75ebe08eff29", 00:13:16.969 "is_configured": true, 00:13:16.969 "data_offset": 2048, 00:13:16.969 "data_size": 63488 00:13:16.969 }, 00:13:16.969 { 00:13:16.969 "name": "BaseBdev2", 00:13:16.969 "uuid": "2f81888d-0652-56b5-bab6-08ef4872f824", 00:13:16.969 "is_configured": true, 00:13:16.969 "data_offset": 2048, 00:13:16.969 "data_size": 63488 00:13:16.969 }, 00:13:16.969 { 00:13:16.969 "name": "BaseBdev3", 00:13:16.969 "uuid": "28136421-cba8-53f0-a17b-674974c1a0a4", 00:13:16.969 "is_configured": true, 00:13:16.969 "data_offset": 2048, 00:13:16.969 "data_size": 63488 00:13:16.969 }, 00:13:16.969 { 00:13:16.969 "name": "BaseBdev4", 00:13:16.969 "uuid": "26df35f1-38a2-5044-8177-d34b3c50d7c0", 00:13:16.969 "is_configured": true, 00:13:16.969 "data_offset": 2048, 00:13:16.969 "data_size": 63488 00:13:16.969 } 00:13:16.969 ] 00:13:16.969 }' 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:16.969 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:17.548 [2024-11-27 14:12:54.598515] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:17.548 [2024-11-27 14:12:54.598573] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:17.548 [2024-11-27 14:12:54.602278] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:17.548 [2024-11-27 14:12:54.602372] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:17.548 [2024-11-27 14:12:54.602431] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:17.548 [2024-11-27 14:12:54.602481] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:17.548 { 00:13:17.548 "results": [ 00:13:17.548 { 00:13:17.548 "job": "raid_bdev1", 00:13:17.548 "core_mask": "0x1", 00:13:17.548 "workload": "randrw", 00:13:17.548 "percentage": 50, 00:13:17.548 "status": "finished", 00:13:17.548 "queue_depth": 1, 00:13:17.548 "io_size": 131072, 00:13:17.548 "runtime": 1.449893, 00:13:17.548 "iops": 10044.189467774519, 00:13:17.548 "mibps": 1255.5236834718148, 00:13:17.548 "io_failed": 1, 00:13:17.548 "io_timeout": 0, 00:13:17.548 "avg_latency_us": 138.31889940326084, 00:13:17.548 "min_latency_us": 39.56363636363636, 00:13:17.548 "max_latency_us": 1995.8690909090908 00:13:17.548 } 00:13:17.548 ], 00:13:17.548 "core_count": 1 00:13:17.548 } 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 73078 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 73078 ']' 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 73078 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73078 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:17.548 killing process with pid 73078 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73078' 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 73078 00:13:17.548 [2024-11-27 14:12:54.636802] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:17.548 14:12:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 73078 00:13:17.808 [2024-11-27 14:12:54.916891] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:19.185 14:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:19.185 14:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.JZnhhEcBbC 00:13:19.185 14:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:19.185 14:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.69 00:13:19.185 14:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy concat 00:13:19.185 14:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:19.185 14:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@200 -- # return 1 00:13:19.185 14:12:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@849 -- # [[ 0.69 != \0\.\0\0 ]] 00:13:19.185 00:13:19.185 real 0m4.924s 00:13:19.185 user 0m6.104s 
00:13:19.185 sys 0m0.616s 00:13:19.185 14:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:19.185 ************************************ 00:13:19.185 14:12:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.185 END TEST raid_write_error_test 00:13:19.185 ************************************ 00:13:19.185 14:12:56 bdev_raid -- bdev/bdev_raid.sh@967 -- # for level in raid0 concat raid1 00:13:19.185 14:12:56 bdev_raid -- bdev/bdev_raid.sh@968 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:13:19.185 14:12:56 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:19.185 14:12:56 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:19.185 14:12:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:19.185 ************************************ 00:13:19.185 START TEST raid_state_function_test 00:13:19.185 ************************************ 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 false 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.185 
14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:19.185 14:12:56 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=73226 00:13:19.185 Process raid pid: 73226 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73226' 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 73226 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 73226 ']' 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:19.185 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:19.185 14:12:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:19.185 [2024-11-27 14:12:56.208803] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:13:19.185 [2024-11-27 14:12:56.208970] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:19.186 [2024-11-27 14:12:56.402022] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:19.444 [2024-11-27 14:12:56.560870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:19.703 [2024-11-27 14:12:56.765672] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:19.703 [2024-11-27 14:12:56.765741] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:20.272 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:20.272 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:13:20.272 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:20.272 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.272 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.272 [2024-11-27 14:12:57.285355] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:20.272 [2024-11-27 14:12:57.285426] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:20.272 [2024-11-27 14:12:57.285442] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:20.272 [2024-11-27 14:12:57.285458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:20.272 [2024-11-27 14:12:57.285468] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:13:20.273 [2024-11-27 14:12:57.285481] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:20.273 [2024-11-27 14:12:57.285490] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:20.273 [2024-11-27 14:12:57.285504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.273 "name": "Existed_Raid", 00:13:20.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.273 "strip_size_kb": 0, 00:13:20.273 "state": "configuring", 00:13:20.273 "raid_level": "raid1", 00:13:20.273 "superblock": false, 00:13:20.273 "num_base_bdevs": 4, 00:13:20.273 "num_base_bdevs_discovered": 0, 00:13:20.273 "num_base_bdevs_operational": 4, 00:13:20.273 "base_bdevs_list": [ 00:13:20.273 { 00:13:20.273 "name": "BaseBdev1", 00:13:20.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.273 "is_configured": false, 00:13:20.273 "data_offset": 0, 00:13:20.273 "data_size": 0 00:13:20.273 }, 00:13:20.273 { 00:13:20.273 "name": "BaseBdev2", 00:13:20.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.273 "is_configured": false, 00:13:20.273 "data_offset": 0, 00:13:20.273 "data_size": 0 00:13:20.273 }, 00:13:20.273 { 00:13:20.273 "name": "BaseBdev3", 00:13:20.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.273 "is_configured": false, 00:13:20.273 "data_offset": 0, 00:13:20.273 "data_size": 0 00:13:20.273 }, 00:13:20.273 { 00:13:20.273 "name": "BaseBdev4", 00:13:20.273 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.273 "is_configured": false, 00:13:20.273 "data_offset": 0, 00:13:20.273 "data_size": 0 00:13:20.273 } 00:13:20.273 ] 00:13:20.273 }' 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.273 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.842 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 
00:13:20.842 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.842 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.842 [2024-11-27 14:12:57.817481] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:20.842 [2024-11-27 14:12:57.817533] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:20.842 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.842 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:20.842 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.842 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.842 [2024-11-27 14:12:57.825453] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:20.842 [2024-11-27 14:12:57.825516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:20.842 [2024-11-27 14:12:57.825530] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:20.842 [2024-11-27 14:12:57.825546] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:20.842 [2024-11-27 14:12:57.825555] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:20.842 [2024-11-27 14:12:57.825569] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:20.842 [2024-11-27 14:12:57.825578] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:20.842 [2024-11-27 14:12:57.825592] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: 
*DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 [2024-11-27 14:12:57.870706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:20.843 BaseBdev1 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 
-t 2000 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 [ 00:13:20.843 { 00:13:20.843 "name": "BaseBdev1", 00:13:20.843 "aliases": [ 00:13:20.843 "5bfbdf59-6066-4a34-af40-89e0d47d65f9" 00:13:20.843 ], 00:13:20.843 "product_name": "Malloc disk", 00:13:20.843 "block_size": 512, 00:13:20.843 "num_blocks": 65536, 00:13:20.843 "uuid": "5bfbdf59-6066-4a34-af40-89e0d47d65f9", 00:13:20.843 "assigned_rate_limits": { 00:13:20.843 "rw_ios_per_sec": 0, 00:13:20.843 "rw_mbytes_per_sec": 0, 00:13:20.843 "r_mbytes_per_sec": 0, 00:13:20.843 "w_mbytes_per_sec": 0 00:13:20.843 }, 00:13:20.843 "claimed": true, 00:13:20.843 "claim_type": "exclusive_write", 00:13:20.843 "zoned": false, 00:13:20.843 "supported_io_types": { 00:13:20.843 "read": true, 00:13:20.843 "write": true, 00:13:20.843 "unmap": true, 00:13:20.843 "flush": true, 00:13:20.843 "reset": true, 00:13:20.843 "nvme_admin": false, 00:13:20.843 "nvme_io": false, 00:13:20.843 "nvme_io_md": false, 00:13:20.843 "write_zeroes": true, 00:13:20.843 "zcopy": true, 00:13:20.843 "get_zone_info": false, 00:13:20.843 "zone_management": false, 00:13:20.843 "zone_append": false, 00:13:20.843 "compare": false, 00:13:20.843 "compare_and_write": false, 00:13:20.843 "abort": true, 00:13:20.843 "seek_hole": false, 00:13:20.843 "seek_data": false, 00:13:20.843 "copy": true, 00:13:20.843 "nvme_iov_md": false 00:13:20.843 }, 00:13:20.843 "memory_domains": [ 00:13:20.843 { 00:13:20.843 "dma_device_id": "system", 00:13:20.843 "dma_device_type": 1 00:13:20.843 }, 00:13:20.843 { 00:13:20.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.843 "dma_device_type": 2 00:13:20.843 } 00:13:20.843 ], 00:13:20.843 "driver_specific": {} 00:13:20.843 } 00:13:20.843 ] 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:20.843 "name": "Existed_Raid", 00:13:20.843 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:20.843 "strip_size_kb": 0, 00:13:20.843 "state": "configuring", 00:13:20.843 "raid_level": "raid1", 00:13:20.843 "superblock": false, 00:13:20.843 "num_base_bdevs": 4, 00:13:20.843 "num_base_bdevs_discovered": 1, 00:13:20.843 "num_base_bdevs_operational": 4, 00:13:20.843 "base_bdevs_list": [ 00:13:20.843 { 00:13:20.843 "name": "BaseBdev1", 00:13:20.843 "uuid": "5bfbdf59-6066-4a34-af40-89e0d47d65f9", 00:13:20.843 "is_configured": true, 00:13:20.843 "data_offset": 0, 00:13:20.843 "data_size": 65536 00:13:20.843 }, 00:13:20.843 { 00:13:20.843 "name": "BaseBdev2", 00:13:20.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.843 "is_configured": false, 00:13:20.843 "data_offset": 0, 00:13:20.843 "data_size": 0 00:13:20.843 }, 00:13:20.843 { 00:13:20.843 "name": "BaseBdev3", 00:13:20.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.843 "is_configured": false, 00:13:20.843 "data_offset": 0, 00:13:20.843 "data_size": 0 00:13:20.843 }, 00:13:20.843 { 00:13:20.843 "name": "BaseBdev4", 00:13:20.843 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:20.843 "is_configured": false, 00:13:20.843 "data_offset": 0, 00:13:20.843 "data_size": 0 00:13:20.843 } 00:13:20.843 ] 00:13:20.843 }' 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:20.843 14:12:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.412 [2024-11-27 14:12:58.442924] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:21.412 [2024-11-27 14:12:58.442988] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.412 [2024-11-27 14:12:58.450980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:21.412 [2024-11-27 14:12:58.453358] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:21.412 [2024-11-27 14:12:58.453409] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:21.412 [2024-11-27 14:12:58.453425] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:21.412 [2024-11-27 14:12:58.453441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:21.412 [2024-11-27 14:12:58.453451] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:21.412 [2024-11-27 14:12:58.453465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:21.412 14:12:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:21.412 "name": "Existed_Raid", 00:13:21.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.412 "strip_size_kb": 0, 00:13:21.412 "state": "configuring", 00:13:21.412 "raid_level": "raid1", 00:13:21.412 "superblock": false, 00:13:21.412 "num_base_bdevs": 4, 00:13:21.412 "num_base_bdevs_discovered": 1, 00:13:21.412 
"num_base_bdevs_operational": 4, 00:13:21.412 "base_bdevs_list": [ 00:13:21.412 { 00:13:21.412 "name": "BaseBdev1", 00:13:21.412 "uuid": "5bfbdf59-6066-4a34-af40-89e0d47d65f9", 00:13:21.412 "is_configured": true, 00:13:21.412 "data_offset": 0, 00:13:21.412 "data_size": 65536 00:13:21.412 }, 00:13:21.412 { 00:13:21.412 "name": "BaseBdev2", 00:13:21.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.412 "is_configured": false, 00:13:21.412 "data_offset": 0, 00:13:21.412 "data_size": 0 00:13:21.412 }, 00:13:21.412 { 00:13:21.412 "name": "BaseBdev3", 00:13:21.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.412 "is_configured": false, 00:13:21.412 "data_offset": 0, 00:13:21.412 "data_size": 0 00:13:21.412 }, 00:13:21.412 { 00:13:21.412 "name": "BaseBdev4", 00:13:21.412 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.412 "is_configured": false, 00:13:21.412 "data_offset": 0, 00:13:21.412 "data_size": 0 00:13:21.412 } 00:13:21.412 ] 00:13:21.412 }' 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:21.412 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.016 14:12:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:22.016 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.016 14:12:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.016 [2024-11-27 14:12:59.033655] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:22.016 BaseBdev2 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev2 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.016 [ 00:13:22.016 { 00:13:22.016 "name": "BaseBdev2", 00:13:22.016 "aliases": [ 00:13:22.016 "9c0b8bd5-1adc-4d27-90e6-2215fe92d186" 00:13:22.016 ], 00:13:22.016 "product_name": "Malloc disk", 00:13:22.016 "block_size": 512, 00:13:22.016 "num_blocks": 65536, 00:13:22.016 "uuid": "9c0b8bd5-1adc-4d27-90e6-2215fe92d186", 00:13:22.016 "assigned_rate_limits": { 00:13:22.016 "rw_ios_per_sec": 0, 00:13:22.016 "rw_mbytes_per_sec": 0, 00:13:22.016 "r_mbytes_per_sec": 0, 00:13:22.016 "w_mbytes_per_sec": 0 00:13:22.016 }, 00:13:22.016 "claimed": true, 00:13:22.016 "claim_type": "exclusive_write", 00:13:22.016 "zoned": false, 00:13:22.016 "supported_io_types": { 00:13:22.016 "read": true, 00:13:22.016 "write": true, 00:13:22.016 
"unmap": true, 00:13:22.016 "flush": true, 00:13:22.016 "reset": true, 00:13:22.016 "nvme_admin": false, 00:13:22.016 "nvme_io": false, 00:13:22.016 "nvme_io_md": false, 00:13:22.016 "write_zeroes": true, 00:13:22.016 "zcopy": true, 00:13:22.016 "get_zone_info": false, 00:13:22.016 "zone_management": false, 00:13:22.016 "zone_append": false, 00:13:22.016 "compare": false, 00:13:22.016 "compare_and_write": false, 00:13:22.016 "abort": true, 00:13:22.016 "seek_hole": false, 00:13:22.016 "seek_data": false, 00:13:22.016 "copy": true, 00:13:22.016 "nvme_iov_md": false 00:13:22.016 }, 00:13:22.016 "memory_domains": [ 00:13:22.016 { 00:13:22.016 "dma_device_id": "system", 00:13:22.016 "dma_device_type": 1 00:13:22.016 }, 00:13:22.016 { 00:13:22.016 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.016 "dma_device_type": 2 00:13:22.016 } 00:13:22.016 ], 00:13:22.016 "driver_specific": {} 00:13:22.016 } 00:13:22.016 ] 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.016 14:12:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.016 "name": "Existed_Raid", 00:13:22.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.016 "strip_size_kb": 0, 00:13:22.016 "state": "configuring", 00:13:22.016 "raid_level": "raid1", 00:13:22.016 "superblock": false, 00:13:22.016 "num_base_bdevs": 4, 00:13:22.016 "num_base_bdevs_discovered": 2, 00:13:22.016 "num_base_bdevs_operational": 4, 00:13:22.016 "base_bdevs_list": [ 00:13:22.016 { 00:13:22.016 "name": "BaseBdev1", 00:13:22.016 "uuid": "5bfbdf59-6066-4a34-af40-89e0d47d65f9", 00:13:22.016 "is_configured": true, 00:13:22.016 "data_offset": 0, 00:13:22.016 "data_size": 65536 00:13:22.016 }, 00:13:22.016 { 00:13:22.016 "name": "BaseBdev2", 00:13:22.016 "uuid": "9c0b8bd5-1adc-4d27-90e6-2215fe92d186", 00:13:22.016 "is_configured": true, 00:13:22.016 
"data_offset": 0, 00:13:22.016 "data_size": 65536 00:13:22.016 }, 00:13:22.016 { 00:13:22.016 "name": "BaseBdev3", 00:13:22.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.016 "is_configured": false, 00:13:22.016 "data_offset": 0, 00:13:22.016 "data_size": 0 00:13:22.016 }, 00:13:22.016 { 00:13:22.016 "name": "BaseBdev4", 00:13:22.016 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.016 "is_configured": false, 00:13:22.016 "data_offset": 0, 00:13:22.016 "data_size": 0 00:13:22.016 } 00:13:22.016 ] 00:13:22.016 }' 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.016 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.584 [2024-11-27 14:12:59.616847] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:22.584 BaseBdev3 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # 
bdev_timeout=2000 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.584 [ 00:13:22.584 { 00:13:22.584 "name": "BaseBdev3", 00:13:22.584 "aliases": [ 00:13:22.584 "72a59c2b-06ee-4adc-8db1-560eb2ad178f" 00:13:22.584 ], 00:13:22.584 "product_name": "Malloc disk", 00:13:22.584 "block_size": 512, 00:13:22.584 "num_blocks": 65536, 00:13:22.584 "uuid": "72a59c2b-06ee-4adc-8db1-560eb2ad178f", 00:13:22.584 "assigned_rate_limits": { 00:13:22.584 "rw_ios_per_sec": 0, 00:13:22.584 "rw_mbytes_per_sec": 0, 00:13:22.584 "r_mbytes_per_sec": 0, 00:13:22.584 "w_mbytes_per_sec": 0 00:13:22.584 }, 00:13:22.584 "claimed": true, 00:13:22.584 "claim_type": "exclusive_write", 00:13:22.584 "zoned": false, 00:13:22.584 "supported_io_types": { 00:13:22.584 "read": true, 00:13:22.584 "write": true, 00:13:22.584 "unmap": true, 00:13:22.584 "flush": true, 00:13:22.584 "reset": true, 00:13:22.584 "nvme_admin": false, 00:13:22.584 "nvme_io": false, 00:13:22.584 "nvme_io_md": false, 00:13:22.584 "write_zeroes": true, 00:13:22.584 "zcopy": true, 00:13:22.584 "get_zone_info": false, 00:13:22.584 "zone_management": false, 00:13:22.584 "zone_append": false, 00:13:22.584 "compare": false, 00:13:22.584 "compare_and_write": false, 00:13:22.584 "abort": true, 
00:13:22.584 "seek_hole": false, 00:13:22.584 "seek_data": false, 00:13:22.584 "copy": true, 00:13:22.584 "nvme_iov_md": false 00:13:22.584 }, 00:13:22.584 "memory_domains": [ 00:13:22.584 { 00:13:22.584 "dma_device_id": "system", 00:13:22.584 "dma_device_type": 1 00:13:22.584 }, 00:13:22.584 { 00:13:22.584 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.584 "dma_device_type": 2 00:13:22.584 } 00:13:22.584 ], 00:13:22.584 "driver_specific": {} 00:13:22.584 } 00:13:22.584 ] 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:22.584 14:12:59 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.584 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.585 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:22.585 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.585 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:22.585 "name": "Existed_Raid", 00:13:22.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.585 "strip_size_kb": 0, 00:13:22.585 "state": "configuring", 00:13:22.585 "raid_level": "raid1", 00:13:22.585 "superblock": false, 00:13:22.585 "num_base_bdevs": 4, 00:13:22.585 "num_base_bdevs_discovered": 3, 00:13:22.585 "num_base_bdevs_operational": 4, 00:13:22.585 "base_bdevs_list": [ 00:13:22.585 { 00:13:22.585 "name": "BaseBdev1", 00:13:22.585 "uuid": "5bfbdf59-6066-4a34-af40-89e0d47d65f9", 00:13:22.585 "is_configured": true, 00:13:22.585 "data_offset": 0, 00:13:22.585 "data_size": 65536 00:13:22.585 }, 00:13:22.585 { 00:13:22.585 "name": "BaseBdev2", 00:13:22.585 "uuid": "9c0b8bd5-1adc-4d27-90e6-2215fe92d186", 00:13:22.585 "is_configured": true, 00:13:22.585 "data_offset": 0, 00:13:22.585 "data_size": 65536 00:13:22.585 }, 00:13:22.585 { 00:13:22.585 "name": "BaseBdev3", 00:13:22.585 "uuid": "72a59c2b-06ee-4adc-8db1-560eb2ad178f", 00:13:22.585 "is_configured": true, 00:13:22.585 "data_offset": 0, 00:13:22.585 "data_size": 65536 00:13:22.585 }, 00:13:22.585 { 00:13:22.585 "name": "BaseBdev4", 00:13:22.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:22.585 "is_configured": false, 00:13:22.585 "data_offset": 
0, 00:13:22.585 "data_size": 0 00:13:22.585 } 00:13:22.585 ] 00:13:22.585 }' 00:13:22.585 14:12:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:22.585 14:12:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.154 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:23.154 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.154 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.154 [2024-11-27 14:13:00.207482] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:23.154 [2024-11-27 14:13:00.207568] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:23.154 [2024-11-27 14:13:00.207581] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:23.154 [2024-11-27 14:13:00.207960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:23.154 [2024-11-27 14:13:00.208190] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:23.154 [2024-11-27 14:13:00.208225] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:13:23.154 [2024-11-27 14:13:00.208551] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:23.154 BaseBdev4 00:13:23.154 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.154 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:23.154 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:23.154 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local 
bdev_timeout= 00:13:23.154 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.155 [ 00:13:23.155 { 00:13:23.155 "name": "BaseBdev4", 00:13:23.155 "aliases": [ 00:13:23.155 "19037425-756e-41b1-b506-dd1c4ee498e0" 00:13:23.155 ], 00:13:23.155 "product_name": "Malloc disk", 00:13:23.155 "block_size": 512, 00:13:23.155 "num_blocks": 65536, 00:13:23.155 "uuid": "19037425-756e-41b1-b506-dd1c4ee498e0", 00:13:23.155 "assigned_rate_limits": { 00:13:23.155 "rw_ios_per_sec": 0, 00:13:23.155 "rw_mbytes_per_sec": 0, 00:13:23.155 "r_mbytes_per_sec": 0, 00:13:23.155 "w_mbytes_per_sec": 0 00:13:23.155 }, 00:13:23.155 "claimed": true, 00:13:23.155 "claim_type": "exclusive_write", 00:13:23.155 "zoned": false, 00:13:23.155 "supported_io_types": { 00:13:23.155 "read": true, 00:13:23.155 "write": true, 00:13:23.155 "unmap": true, 00:13:23.155 "flush": true, 00:13:23.155 "reset": true, 00:13:23.155 "nvme_admin": false, 00:13:23.155 "nvme_io": 
false, 00:13:23.155 "nvme_io_md": false, 00:13:23.155 "write_zeroes": true, 00:13:23.155 "zcopy": true, 00:13:23.155 "get_zone_info": false, 00:13:23.155 "zone_management": false, 00:13:23.155 "zone_append": false, 00:13:23.155 "compare": false, 00:13:23.155 "compare_and_write": false, 00:13:23.155 "abort": true, 00:13:23.155 "seek_hole": false, 00:13:23.155 "seek_data": false, 00:13:23.155 "copy": true, 00:13:23.155 "nvme_iov_md": false 00:13:23.155 }, 00:13:23.155 "memory_domains": [ 00:13:23.155 { 00:13:23.155 "dma_device_id": "system", 00:13:23.155 "dma_device_type": 1 00:13:23.155 }, 00:13:23.155 { 00:13:23.155 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.155 "dma_device_type": 2 00:13:23.155 } 00:13:23.155 ], 00:13:23.155 "driver_specific": {} 00:13:23.155 } 00:13:23.155 ] 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:23.155 "name": "Existed_Raid", 00:13:23.155 "uuid": "56f4f961-2f65-4223-aa3d-c781794dcc84", 00:13:23.155 "strip_size_kb": 0, 00:13:23.155 "state": "online", 00:13:23.155 "raid_level": "raid1", 00:13:23.155 "superblock": false, 00:13:23.155 "num_base_bdevs": 4, 00:13:23.155 "num_base_bdevs_discovered": 4, 00:13:23.155 "num_base_bdevs_operational": 4, 00:13:23.155 "base_bdevs_list": [ 00:13:23.155 { 00:13:23.155 "name": "BaseBdev1", 00:13:23.155 "uuid": "5bfbdf59-6066-4a34-af40-89e0d47d65f9", 00:13:23.155 "is_configured": true, 00:13:23.155 "data_offset": 0, 00:13:23.155 "data_size": 65536 00:13:23.155 }, 00:13:23.155 { 00:13:23.155 "name": "BaseBdev2", 00:13:23.155 "uuid": "9c0b8bd5-1adc-4d27-90e6-2215fe92d186", 00:13:23.155 "is_configured": true, 00:13:23.155 "data_offset": 0, 00:13:23.155 "data_size": 65536 00:13:23.155 }, 00:13:23.155 { 00:13:23.155 "name": "BaseBdev3", 00:13:23.155 "uuid": "72a59c2b-06ee-4adc-8db1-560eb2ad178f", 
00:13:23.155 "is_configured": true, 00:13:23.155 "data_offset": 0, 00:13:23.155 "data_size": 65536 00:13:23.155 }, 00:13:23.155 { 00:13:23.155 "name": "BaseBdev4", 00:13:23.155 "uuid": "19037425-756e-41b1-b506-dd1c4ee498e0", 00:13:23.155 "is_configured": true, 00:13:23.155 "data_offset": 0, 00:13:23.155 "data_size": 65536 00:13:23.155 } 00:13:23.155 ] 00:13:23.155 }' 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:23.155 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.722 [2024-11-27 14:13:00.760168] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:23.722 "name": "Existed_Raid", 00:13:23.722 "aliases": [ 00:13:23.722 "56f4f961-2f65-4223-aa3d-c781794dcc84" 00:13:23.722 ], 00:13:23.722 "product_name": "Raid Volume", 00:13:23.722 "block_size": 512, 00:13:23.722 "num_blocks": 65536, 00:13:23.722 "uuid": "56f4f961-2f65-4223-aa3d-c781794dcc84", 00:13:23.722 "assigned_rate_limits": { 00:13:23.722 "rw_ios_per_sec": 0, 00:13:23.722 "rw_mbytes_per_sec": 0, 00:13:23.722 "r_mbytes_per_sec": 0, 00:13:23.722 "w_mbytes_per_sec": 0 00:13:23.722 }, 00:13:23.722 "claimed": false, 00:13:23.722 "zoned": false, 00:13:23.722 "supported_io_types": { 00:13:23.722 "read": true, 00:13:23.722 "write": true, 00:13:23.722 "unmap": false, 00:13:23.722 "flush": false, 00:13:23.722 "reset": true, 00:13:23.722 "nvme_admin": false, 00:13:23.722 "nvme_io": false, 00:13:23.722 "nvme_io_md": false, 00:13:23.722 "write_zeroes": true, 00:13:23.722 "zcopy": false, 00:13:23.722 "get_zone_info": false, 00:13:23.722 "zone_management": false, 00:13:23.722 "zone_append": false, 00:13:23.722 "compare": false, 00:13:23.722 "compare_and_write": false, 00:13:23.722 "abort": false, 00:13:23.722 "seek_hole": false, 00:13:23.722 "seek_data": false, 00:13:23.722 "copy": false, 00:13:23.722 "nvme_iov_md": false 00:13:23.722 }, 00:13:23.722 "memory_domains": [ 00:13:23.722 { 00:13:23.722 "dma_device_id": "system", 00:13:23.722 "dma_device_type": 1 00:13:23.722 }, 00:13:23.722 { 00:13:23.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.722 "dma_device_type": 2 00:13:23.722 }, 00:13:23.722 { 00:13:23.722 "dma_device_id": "system", 00:13:23.722 "dma_device_type": 1 00:13:23.722 }, 00:13:23.722 { 00:13:23.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.722 "dma_device_type": 2 00:13:23.722 }, 00:13:23.722 { 00:13:23.722 "dma_device_id": "system", 00:13:23.722 "dma_device_type": 1 00:13:23.722 }, 00:13:23.722 { 00:13:23.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.722 "dma_device_type": 2 
00:13:23.722 }, 00:13:23.722 { 00:13:23.722 "dma_device_id": "system", 00:13:23.722 "dma_device_type": 1 00:13:23.722 }, 00:13:23.722 { 00:13:23.722 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.722 "dma_device_type": 2 00:13:23.722 } 00:13:23.722 ], 00:13:23.722 "driver_specific": { 00:13:23.722 "raid": { 00:13:23.722 "uuid": "56f4f961-2f65-4223-aa3d-c781794dcc84", 00:13:23.722 "strip_size_kb": 0, 00:13:23.722 "state": "online", 00:13:23.722 "raid_level": "raid1", 00:13:23.722 "superblock": false, 00:13:23.722 "num_base_bdevs": 4, 00:13:23.722 "num_base_bdevs_discovered": 4, 00:13:23.722 "num_base_bdevs_operational": 4, 00:13:23.722 "base_bdevs_list": [ 00:13:23.722 { 00:13:23.722 "name": "BaseBdev1", 00:13:23.722 "uuid": "5bfbdf59-6066-4a34-af40-89e0d47d65f9", 00:13:23.722 "is_configured": true, 00:13:23.722 "data_offset": 0, 00:13:23.722 "data_size": 65536 00:13:23.722 }, 00:13:23.722 { 00:13:23.722 "name": "BaseBdev2", 00:13:23.722 "uuid": "9c0b8bd5-1adc-4d27-90e6-2215fe92d186", 00:13:23.722 "is_configured": true, 00:13:23.722 "data_offset": 0, 00:13:23.722 "data_size": 65536 00:13:23.722 }, 00:13:23.722 { 00:13:23.722 "name": "BaseBdev3", 00:13:23.722 "uuid": "72a59c2b-06ee-4adc-8db1-560eb2ad178f", 00:13:23.722 "is_configured": true, 00:13:23.722 "data_offset": 0, 00:13:23.722 "data_size": 65536 00:13:23.722 }, 00:13:23.722 { 00:13:23.722 "name": "BaseBdev4", 00:13:23.722 "uuid": "19037425-756e-41b1-b506-dd1c4ee498e0", 00:13:23.722 "is_configured": true, 00:13:23.722 "data_offset": 0, 00:13:23.722 "data_size": 65536 00:13:23.722 } 00:13:23.722 ] 00:13:23.722 } 00:13:23.722 } 00:13:23.722 }' 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:23.722 BaseBdev2 00:13:23.722 BaseBdev3 00:13:23.722 BaseBdev4' 00:13:23.722 
14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.722 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.723 14:13:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 
-- # cmp_base_bdev='512 ' 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 
512 == \5\1\2\ \ \ ]] 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 [2024-11-27 14:13:01.127929] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # 
local num_base_bdevs 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:23.981 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.240 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:24.240 "name": "Existed_Raid", 00:13:24.240 "uuid": "56f4f961-2f65-4223-aa3d-c781794dcc84", 00:13:24.240 "strip_size_kb": 0, 00:13:24.240 "state": "online", 00:13:24.240 "raid_level": "raid1", 00:13:24.240 "superblock": false, 00:13:24.240 "num_base_bdevs": 4, 00:13:24.240 "num_base_bdevs_discovered": 3, 00:13:24.240 "num_base_bdevs_operational": 3, 00:13:24.240 "base_bdevs_list": [ 00:13:24.240 { 00:13:24.240 "name": null, 00:13:24.240 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:24.240 "is_configured": false, 00:13:24.240 "data_offset": 0, 00:13:24.240 "data_size": 65536 00:13:24.240 }, 00:13:24.240 { 00:13:24.240 "name": "BaseBdev2", 00:13:24.240 "uuid": "9c0b8bd5-1adc-4d27-90e6-2215fe92d186", 00:13:24.240 "is_configured": true, 00:13:24.240 "data_offset": 0, 00:13:24.240 "data_size": 65536 00:13:24.240 }, 00:13:24.240 { 00:13:24.240 "name": "BaseBdev3", 00:13:24.240 "uuid": "72a59c2b-06ee-4adc-8db1-560eb2ad178f", 00:13:24.240 "is_configured": true, 00:13:24.240 "data_offset": 0, 00:13:24.240 "data_size": 65536 00:13:24.240 }, 00:13:24.240 { 
00:13:24.240 "name": "BaseBdev4", 00:13:24.240 "uuid": "19037425-756e-41b1-b506-dd1c4ee498e0", 00:13:24.240 "is_configured": true, 00:13:24.240 "data_offset": 0, 00:13:24.240 "data_size": 65536 00:13:24.240 } 00:13:24.240 ] 00:13:24.240 }' 00:13:24.240 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:24.240 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.500 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:24.500 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:24.500 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:24.500 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.500 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.500 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.500 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.759 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:24.759 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:24.759 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:24.759 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.759 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.759 [2024-11-27 14:13:01.802570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.760 
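After `bdev_malloc_delete BaseBdev1`, the `verify_raid_bdev_state Existed_Raid online raid1 0 3` call above re-reads the raid bdev and checks its state and member counts; because raid1 has redundancy, one missing member leaves the array online. A hypothetical Python re-implementation of that check, using values from the `@113` dump above (the script does this with `rpc_cmd bdev_raid_get_bdevs` and `jq`):

```python
import json

# Values from the bdev_raid.sh@113 dump above, after BaseBdev1 was removed:
# the first slot becomes a null placeholder with a zeroed uuid.
raid_info = json.loads("""
{
  "name": "Existed_Raid",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3,
  "base_bdevs_list": [
    {"name": null, "is_configured": false},
    {"name": "BaseBdev2", "is_configured": true},
    {"name": "BaseBdev3", "is_configured": true},
    {"name": "BaseBdev4", "is_configured": true}
  ]
}
""")

expected_state = "online"  # raid1 has redundancy, so one missing member is OK
discovered = sum(1 for b in raid_info["base_bdevs_list"] if b["is_configured"])

assert raid_info["state"] == expected_state
assert discovered == raid_info["num_base_bdevs_discovered"] == 3
```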
14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:24.760 14:13:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:24.760 [2024-11-27 14:13:01.950041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:24.760 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:24.760 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:24.760 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.019 14:13:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.019 [2024-11-27 14:13:02.096570] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:25.019 [2024-11-27 14:13:02.096704] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:25.019 [2024-11-27 14:13:02.185497] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:25.019 [2024-11-27 14:13:02.185574] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:25.019 [2024-11-27 14:13:02.185594] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.019 14:13:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.019 BaseBdev2 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.019 14:13:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.019 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.280 [ 00:13:25.280 { 00:13:25.280 "name": "BaseBdev2", 00:13:25.280 "aliases": [ 00:13:25.280 "17cff1f1-5efb-4f09-a6c6-9311e28c8bbd" 00:13:25.280 ], 00:13:25.280 "product_name": "Malloc disk", 00:13:25.280 "block_size": 512, 00:13:25.280 "num_blocks": 65536, 00:13:25.280 "uuid": "17cff1f1-5efb-4f09-a6c6-9311e28c8bbd", 00:13:25.280 "assigned_rate_limits": { 00:13:25.280 "rw_ios_per_sec": 0, 00:13:25.280 "rw_mbytes_per_sec": 0, 00:13:25.280 "r_mbytes_per_sec": 0, 00:13:25.280 "w_mbytes_per_sec": 0 00:13:25.280 }, 00:13:25.280 "claimed": false, 00:13:25.280 "zoned": false, 00:13:25.280 "supported_io_types": { 00:13:25.280 "read": true, 00:13:25.280 "write": true, 00:13:25.280 "unmap": true, 00:13:25.280 "flush": true, 00:13:25.280 "reset": true, 00:13:25.280 "nvme_admin": false, 00:13:25.280 "nvme_io": false, 00:13:25.280 "nvme_io_md": false, 00:13:25.280 "write_zeroes": true, 00:13:25.280 "zcopy": true, 00:13:25.280 "get_zone_info": false, 00:13:25.280 "zone_management": false, 00:13:25.280 "zone_append": false, 00:13:25.280 "compare": false, 00:13:25.280 "compare_and_write": false, 
00:13:25.280 "abort": true, 00:13:25.280 "seek_hole": false, 00:13:25.280 "seek_data": false, 00:13:25.280 "copy": true, 00:13:25.280 "nvme_iov_md": false 00:13:25.280 }, 00:13:25.280 "memory_domains": [ 00:13:25.280 { 00:13:25.280 "dma_device_id": "system", 00:13:25.280 "dma_device_type": 1 00:13:25.280 }, 00:13:25.280 { 00:13:25.280 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.280 "dma_device_type": 2 00:13:25.280 } 00:13:25.280 ], 00:13:25.280 "driver_specific": {} 00:13:25.280 } 00:13:25.280 ] 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.280 BaseBdev3 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.280 14:13:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.280 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.280 [ 00:13:25.280 { 00:13:25.280 "name": "BaseBdev3", 00:13:25.280 "aliases": [ 00:13:25.280 "2eb54851-bc9e-48d3-97e4-6fd2f3189970" 00:13:25.280 ], 00:13:25.280 "product_name": "Malloc disk", 00:13:25.280 "block_size": 512, 00:13:25.280 "num_blocks": 65536, 00:13:25.280 "uuid": "2eb54851-bc9e-48d3-97e4-6fd2f3189970", 00:13:25.280 "assigned_rate_limits": { 00:13:25.280 "rw_ios_per_sec": 0, 00:13:25.280 "rw_mbytes_per_sec": 0, 00:13:25.280 "r_mbytes_per_sec": 0, 00:13:25.280 "w_mbytes_per_sec": 0 00:13:25.280 }, 00:13:25.280 "claimed": false, 00:13:25.280 "zoned": false, 00:13:25.280 "supported_io_types": { 00:13:25.280 "read": true, 00:13:25.280 "write": true, 00:13:25.280 "unmap": true, 00:13:25.280 "flush": true, 00:13:25.280 "reset": true, 00:13:25.280 "nvme_admin": false, 00:13:25.280 "nvme_io": false, 00:13:25.280 "nvme_io_md": false, 00:13:25.280 "write_zeroes": true, 00:13:25.281 "zcopy": true, 00:13:25.281 "get_zone_info": false, 00:13:25.281 "zone_management": false, 00:13:25.281 "zone_append": false, 00:13:25.281 "compare": false, 00:13:25.281 "compare_and_write": false, 
00:13:25.281 "abort": true, 00:13:25.281 "seek_hole": false, 00:13:25.281 "seek_data": false, 00:13:25.281 "copy": true, 00:13:25.281 "nvme_iov_md": false 00:13:25.281 }, 00:13:25.281 "memory_domains": [ 00:13:25.281 { 00:13:25.281 "dma_device_id": "system", 00:13:25.281 "dma_device_type": 1 00:13:25.281 }, 00:13:25.281 { 00:13:25.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.281 "dma_device_type": 2 00:13:25.281 } 00:13:25.281 ], 00:13:25.281 "driver_specific": {} 00:13:25.281 } 00:13:25.281 ] 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.281 BaseBdev4 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:25.281 14:13:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.281 [ 00:13:25.281 { 00:13:25.281 "name": "BaseBdev4", 00:13:25.281 "aliases": [ 00:13:25.281 "d88f4d03-749d-45e1-b493-2c7360bca706" 00:13:25.281 ], 00:13:25.281 "product_name": "Malloc disk", 00:13:25.281 "block_size": 512, 00:13:25.281 "num_blocks": 65536, 00:13:25.281 "uuid": "d88f4d03-749d-45e1-b493-2c7360bca706", 00:13:25.281 "assigned_rate_limits": { 00:13:25.281 "rw_ios_per_sec": 0, 00:13:25.281 "rw_mbytes_per_sec": 0, 00:13:25.281 "r_mbytes_per_sec": 0, 00:13:25.281 "w_mbytes_per_sec": 0 00:13:25.281 }, 00:13:25.281 "claimed": false, 00:13:25.281 "zoned": false, 00:13:25.281 "supported_io_types": { 00:13:25.281 "read": true, 00:13:25.281 "write": true, 00:13:25.281 "unmap": true, 00:13:25.281 "flush": true, 00:13:25.281 "reset": true, 00:13:25.281 "nvme_admin": false, 00:13:25.281 "nvme_io": false, 00:13:25.281 "nvme_io_md": false, 00:13:25.281 "write_zeroes": true, 00:13:25.281 "zcopy": true, 00:13:25.281 "get_zone_info": false, 00:13:25.281 "zone_management": false, 00:13:25.281 "zone_append": false, 00:13:25.281 "compare": false, 00:13:25.281 "compare_and_write": false, 
00:13:25.281 "abort": true, 00:13:25.281 "seek_hole": false, 00:13:25.281 "seek_data": false, 00:13:25.281 "copy": true, 00:13:25.281 "nvme_iov_md": false 00:13:25.281 }, 00:13:25.281 "memory_domains": [ 00:13:25.281 { 00:13:25.281 "dma_device_id": "system", 00:13:25.281 "dma_device_type": 1 00:13:25.281 }, 00:13:25.281 { 00:13:25.281 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:25.281 "dma_device_type": 2 00:13:25.281 } 00:13:25.281 ], 00:13:25.281 "driver_specific": {} 00:13:25.281 } 00:13:25.281 ] 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.281 [2024-11-27 14:13:02.474737] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:25.281 [2024-11-27 14:13:02.474808] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:25.281 [2024-11-27 14:13:02.474839] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:25.281 [2024-11-27 14:13:02.477257] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:25.281 [2024-11-27 14:13:02.477324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:25.281 14:13:02 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.281 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.281 "name": "Existed_Raid", 00:13:25.281 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:25.281 "strip_size_kb": 0, 00:13:25.281 "state": "configuring", 00:13:25.282 "raid_level": "raid1", 00:13:25.282 "superblock": false, 00:13:25.282 "num_base_bdevs": 4, 00:13:25.282 "num_base_bdevs_discovered": 3, 00:13:25.282 "num_base_bdevs_operational": 4, 00:13:25.282 "base_bdevs_list": [ 00:13:25.282 { 00:13:25.282 "name": "BaseBdev1", 00:13:25.282 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.282 "is_configured": false, 00:13:25.282 "data_offset": 0, 00:13:25.282 "data_size": 0 00:13:25.282 }, 00:13:25.282 { 00:13:25.282 "name": "BaseBdev2", 00:13:25.282 "uuid": "17cff1f1-5efb-4f09-a6c6-9311e28c8bbd", 00:13:25.282 "is_configured": true, 00:13:25.282 "data_offset": 0, 00:13:25.282 "data_size": 65536 00:13:25.282 }, 00:13:25.282 { 00:13:25.282 "name": "BaseBdev3", 00:13:25.282 "uuid": "2eb54851-bc9e-48d3-97e4-6fd2f3189970", 00:13:25.282 "is_configured": true, 00:13:25.282 "data_offset": 0, 00:13:25.282 "data_size": 65536 00:13:25.282 }, 00:13:25.282 { 00:13:25.282 "name": "BaseBdev4", 00:13:25.282 "uuid": "d88f4d03-749d-45e1-b493-2c7360bca706", 00:13:25.282 "is_configured": true, 00:13:25.282 "data_offset": 0, 00:13:25.282 "data_size": 65536 00:13:25.282 } 00:13:25.282 ] 00:13:25.282 }' 00:13:25.282 14:13:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.282 14:13:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:25.909 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.909 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.909 [2024-11-27 14:13:03.022936] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.909 14:13:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:25.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:25.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:25.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:25.909 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:25.910 "name": "Existed_Raid", 00:13:25.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.910 
"strip_size_kb": 0, 00:13:25.910 "state": "configuring", 00:13:25.910 "raid_level": "raid1", 00:13:25.910 "superblock": false, 00:13:25.910 "num_base_bdevs": 4, 00:13:25.910 "num_base_bdevs_discovered": 2, 00:13:25.910 "num_base_bdevs_operational": 4, 00:13:25.910 "base_bdevs_list": [ 00:13:25.910 { 00:13:25.910 "name": "BaseBdev1", 00:13:25.910 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.910 "is_configured": false, 00:13:25.910 "data_offset": 0, 00:13:25.910 "data_size": 0 00:13:25.910 }, 00:13:25.910 { 00:13:25.910 "name": null, 00:13:25.910 "uuid": "17cff1f1-5efb-4f09-a6c6-9311e28c8bbd", 00:13:25.910 "is_configured": false, 00:13:25.910 "data_offset": 0, 00:13:25.910 "data_size": 65536 00:13:25.910 }, 00:13:25.910 { 00:13:25.910 "name": "BaseBdev3", 00:13:25.910 "uuid": "2eb54851-bc9e-48d3-97e4-6fd2f3189970", 00:13:25.910 "is_configured": true, 00:13:25.910 "data_offset": 0, 00:13:25.910 "data_size": 65536 00:13:25.910 }, 00:13:25.910 { 00:13:25.910 "name": "BaseBdev4", 00:13:25.910 "uuid": "d88f4d03-749d-45e1-b493-2c7360bca706", 00:13:25.910 "is_configured": true, 00:13:25.910 "data_offset": 0, 00:13:25.910 "data_size": 65536 00:13:25.910 } 00:13:25.910 ] 00:13:25.910 }' 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:25.910 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.495 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.495 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.496 14:13:03 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.496 [2024-11-27 14:13:03.617549] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:26.496 BaseBdev1 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.496 [ 00:13:26.496 { 00:13:26.496 "name": "BaseBdev1", 00:13:26.496 "aliases": [ 00:13:26.496 "ddcc0c25-17d5-498a-9bb1-2edb8e66f49d" 00:13:26.496 ], 00:13:26.496 "product_name": "Malloc disk", 00:13:26.496 "block_size": 512, 00:13:26.496 "num_blocks": 65536, 00:13:26.496 "uuid": "ddcc0c25-17d5-498a-9bb1-2edb8e66f49d", 00:13:26.496 "assigned_rate_limits": { 00:13:26.496 "rw_ios_per_sec": 0, 00:13:26.496 "rw_mbytes_per_sec": 0, 00:13:26.496 "r_mbytes_per_sec": 0, 00:13:26.496 "w_mbytes_per_sec": 0 00:13:26.496 }, 00:13:26.496 "claimed": true, 00:13:26.496 "claim_type": "exclusive_write", 00:13:26.496 "zoned": false, 00:13:26.496 "supported_io_types": { 00:13:26.496 "read": true, 00:13:26.496 "write": true, 00:13:26.496 "unmap": true, 00:13:26.496 "flush": true, 00:13:26.496 "reset": true, 00:13:26.496 "nvme_admin": false, 00:13:26.496 "nvme_io": false, 00:13:26.496 "nvme_io_md": false, 00:13:26.496 "write_zeroes": true, 00:13:26.496 "zcopy": true, 00:13:26.496 "get_zone_info": false, 00:13:26.496 "zone_management": false, 00:13:26.496 "zone_append": false, 00:13:26.496 "compare": false, 00:13:26.496 "compare_and_write": false, 00:13:26.496 "abort": true, 00:13:26.496 "seek_hole": false, 00:13:26.496 "seek_data": false, 00:13:26.496 "copy": true, 00:13:26.496 "nvme_iov_md": false 00:13:26.496 }, 00:13:26.496 "memory_domains": [ 00:13:26.496 { 00:13:26.496 "dma_device_id": "system", 00:13:26.496 "dma_device_type": 1 00:13:26.496 }, 00:13:26.496 { 00:13:26.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:26.496 "dma_device_type": 2 00:13:26.496 } 00:13:26.496 ], 00:13:26.496 "driver_specific": {} 00:13:26.496 } 00:13:26.496 ] 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@911 -- # return 0 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:26.496 "name": "Existed_Raid", 00:13:26.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:26.496 
"strip_size_kb": 0, 00:13:26.496 "state": "configuring", 00:13:26.496 "raid_level": "raid1", 00:13:26.496 "superblock": false, 00:13:26.496 "num_base_bdevs": 4, 00:13:26.496 "num_base_bdevs_discovered": 3, 00:13:26.496 "num_base_bdevs_operational": 4, 00:13:26.496 "base_bdevs_list": [ 00:13:26.496 { 00:13:26.496 "name": "BaseBdev1", 00:13:26.496 "uuid": "ddcc0c25-17d5-498a-9bb1-2edb8e66f49d", 00:13:26.496 "is_configured": true, 00:13:26.496 "data_offset": 0, 00:13:26.496 "data_size": 65536 00:13:26.496 }, 00:13:26.496 { 00:13:26.496 "name": null, 00:13:26.496 "uuid": "17cff1f1-5efb-4f09-a6c6-9311e28c8bbd", 00:13:26.496 "is_configured": false, 00:13:26.496 "data_offset": 0, 00:13:26.496 "data_size": 65536 00:13:26.496 }, 00:13:26.496 { 00:13:26.496 "name": "BaseBdev3", 00:13:26.496 "uuid": "2eb54851-bc9e-48d3-97e4-6fd2f3189970", 00:13:26.496 "is_configured": true, 00:13:26.496 "data_offset": 0, 00:13:26.496 "data_size": 65536 00:13:26.496 }, 00:13:26.496 { 00:13:26.496 "name": "BaseBdev4", 00:13:26.496 "uuid": "d88f4d03-749d-45e1-b493-2c7360bca706", 00:13:26.496 "is_configured": true, 00:13:26.496 "data_offset": 0, 00:13:26.496 "data_size": 65536 00:13:26.496 } 00:13:26.496 ] 00:13:26.496 }' 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:26.496 14:13:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.064 
14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.064 [2024-11-27 14:13:04.173765] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.064 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.064 "name": "Existed_Raid", 00:13:27.064 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.064 "strip_size_kb": 0, 00:13:27.064 "state": "configuring", 00:13:27.064 "raid_level": "raid1", 00:13:27.064 "superblock": false, 00:13:27.064 "num_base_bdevs": 4, 00:13:27.064 "num_base_bdevs_discovered": 2, 00:13:27.064 "num_base_bdevs_operational": 4, 00:13:27.064 "base_bdevs_list": [ 00:13:27.064 { 00:13:27.064 "name": "BaseBdev1", 00:13:27.064 "uuid": "ddcc0c25-17d5-498a-9bb1-2edb8e66f49d", 00:13:27.064 "is_configured": true, 00:13:27.064 "data_offset": 0, 00:13:27.064 "data_size": 65536 00:13:27.064 }, 00:13:27.064 { 00:13:27.064 "name": null, 00:13:27.064 "uuid": "17cff1f1-5efb-4f09-a6c6-9311e28c8bbd", 00:13:27.064 "is_configured": false, 00:13:27.064 "data_offset": 0, 00:13:27.064 "data_size": 65536 00:13:27.064 }, 00:13:27.064 { 00:13:27.064 "name": null, 00:13:27.064 "uuid": "2eb54851-bc9e-48d3-97e4-6fd2f3189970", 00:13:27.065 "is_configured": false, 00:13:27.065 "data_offset": 0, 00:13:27.065 "data_size": 65536 00:13:27.065 }, 00:13:27.065 { 00:13:27.065 "name": "BaseBdev4", 00:13:27.065 "uuid": "d88f4d03-749d-45e1-b493-2c7360bca706", 00:13:27.065 "is_configured": true, 00:13:27.065 "data_offset": 0, 00:13:27.065 "data_size": 65536 00:13:27.065 } 00:13:27.065 ] 00:13:27.065 }' 00:13:27.065 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.065 14:13:04 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.633 [2024-11-27 14:13:04.729932] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=4 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:27.633 "name": "Existed_Raid", 00:13:27.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:27.633 "strip_size_kb": 0, 00:13:27.633 "state": "configuring", 00:13:27.633 "raid_level": "raid1", 00:13:27.633 "superblock": false, 00:13:27.633 "num_base_bdevs": 4, 00:13:27.633 "num_base_bdevs_discovered": 3, 00:13:27.633 "num_base_bdevs_operational": 4, 00:13:27.633 "base_bdevs_list": [ 00:13:27.633 { 00:13:27.633 "name": "BaseBdev1", 00:13:27.633 "uuid": "ddcc0c25-17d5-498a-9bb1-2edb8e66f49d", 00:13:27.633 "is_configured": true, 00:13:27.633 "data_offset": 0, 00:13:27.633 "data_size": 65536 00:13:27.633 }, 00:13:27.633 { 00:13:27.633 "name": null, 00:13:27.633 "uuid": "17cff1f1-5efb-4f09-a6c6-9311e28c8bbd", 00:13:27.633 "is_configured": false, 00:13:27.633 "data_offset": 0, 00:13:27.633 "data_size": 65536 00:13:27.633 }, 00:13:27.633 { 
00:13:27.633 "name": "BaseBdev3", 00:13:27.633 "uuid": "2eb54851-bc9e-48d3-97e4-6fd2f3189970", 00:13:27.633 "is_configured": true, 00:13:27.633 "data_offset": 0, 00:13:27.633 "data_size": 65536 00:13:27.633 }, 00:13:27.633 { 00:13:27.633 "name": "BaseBdev4", 00:13:27.633 "uuid": "d88f4d03-749d-45e1-b493-2c7360bca706", 00:13:27.633 "is_configured": true, 00:13:27.633 "data_offset": 0, 00:13:27.633 "data_size": 65536 00:13:27.633 } 00:13:27.633 ] 00:13:27.633 }' 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:27.633 14:13:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.200 [2024-11-27 14:13:05.306137] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@311 -- # 
verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.200 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.201 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.201 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.201 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.201 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.201 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.201 "name": "Existed_Raid", 00:13:28.201 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.201 "strip_size_kb": 0, 00:13:28.201 "state": "configuring", 00:13:28.201 "raid_level": "raid1", 00:13:28.201 "superblock": false, 00:13:28.201 
"num_base_bdevs": 4, 00:13:28.201 "num_base_bdevs_discovered": 2, 00:13:28.201 "num_base_bdevs_operational": 4, 00:13:28.201 "base_bdevs_list": [ 00:13:28.201 { 00:13:28.201 "name": null, 00:13:28.201 "uuid": "ddcc0c25-17d5-498a-9bb1-2edb8e66f49d", 00:13:28.201 "is_configured": false, 00:13:28.201 "data_offset": 0, 00:13:28.201 "data_size": 65536 00:13:28.201 }, 00:13:28.201 { 00:13:28.201 "name": null, 00:13:28.201 "uuid": "17cff1f1-5efb-4f09-a6c6-9311e28c8bbd", 00:13:28.201 "is_configured": false, 00:13:28.201 "data_offset": 0, 00:13:28.201 "data_size": 65536 00:13:28.201 }, 00:13:28.201 { 00:13:28.201 "name": "BaseBdev3", 00:13:28.201 "uuid": "2eb54851-bc9e-48d3-97e4-6fd2f3189970", 00:13:28.201 "is_configured": true, 00:13:28.201 "data_offset": 0, 00:13:28.201 "data_size": 65536 00:13:28.201 }, 00:13:28.201 { 00:13:28.201 "name": "BaseBdev4", 00:13:28.201 "uuid": "d88f4d03-749d-45e1-b493-2c7360bca706", 00:13:28.201 "is_configured": true, 00:13:28.201 "data_offset": 0, 00:13:28.201 "data_size": 65536 00:13:28.201 } 00:13:28.201 ] 00:13:28.201 }' 00:13:28.201 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.201 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:28.797 14:13:05 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.797 [2024-11-27 14:13:05.969659] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.797 14:13:05 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:28.797 14:13:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.797 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:28.797 "name": "Existed_Raid", 00:13:28.797 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:28.797 "strip_size_kb": 0, 00:13:28.797 "state": "configuring", 00:13:28.797 "raid_level": "raid1", 00:13:28.797 "superblock": false, 00:13:28.797 "num_base_bdevs": 4, 00:13:28.797 "num_base_bdevs_discovered": 3, 00:13:28.797 "num_base_bdevs_operational": 4, 00:13:28.797 "base_bdevs_list": [ 00:13:28.797 { 00:13:28.797 "name": null, 00:13:28.797 "uuid": "ddcc0c25-17d5-498a-9bb1-2edb8e66f49d", 00:13:28.797 "is_configured": false, 00:13:28.797 "data_offset": 0, 00:13:28.797 "data_size": 65536 00:13:28.797 }, 00:13:28.797 { 00:13:28.797 "name": "BaseBdev2", 00:13:28.797 "uuid": "17cff1f1-5efb-4f09-a6c6-9311e28c8bbd", 00:13:28.797 "is_configured": true, 00:13:28.797 "data_offset": 0, 00:13:28.797 "data_size": 65536 00:13:28.797 }, 00:13:28.797 { 00:13:28.797 "name": "BaseBdev3", 00:13:28.797 "uuid": "2eb54851-bc9e-48d3-97e4-6fd2f3189970", 00:13:28.797 "is_configured": true, 00:13:28.797 "data_offset": 0, 00:13:28.797 "data_size": 65536 00:13:28.797 }, 00:13:28.797 { 00:13:28.797 "name": "BaseBdev4", 00:13:28.797 "uuid": "d88f4d03-749d-45e1-b493-2c7360bca706", 00:13:28.797 "is_configured": true, 00:13:28.797 "data_offset": 0, 00:13:28.797 "data_size": 65536 00:13:28.797 } 00:13:28.797 ] 00:13:28.797 }' 00:13:28.797 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:28.797 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.364 14:13:06 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ddcc0c25-17d5-498a-9bb1-2edb8e66f49d 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.364 [2024-11-27 14:13:06.627070] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:29.364 [2024-11-27 14:13:06.627466] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:29.364 [2024-11-27 14:13:06.627495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:13:29.364 
[2024-11-27 14:13:06.627891] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:13:29.364 [2024-11-27 14:13:06.628118] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:29.364 [2024-11-27 14:13:06.628134] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:29.364 [2024-11-27 14:13:06.628483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:29.364 NewBaseBdev 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # local i 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:29.364 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.623 [ 00:13:29.623 { 00:13:29.623 "name": "NewBaseBdev", 00:13:29.623 "aliases": [ 00:13:29.623 "ddcc0c25-17d5-498a-9bb1-2edb8e66f49d" 00:13:29.623 ], 00:13:29.623 "product_name": "Malloc disk", 00:13:29.623 "block_size": 512, 00:13:29.623 "num_blocks": 65536, 00:13:29.623 "uuid": "ddcc0c25-17d5-498a-9bb1-2edb8e66f49d", 00:13:29.623 "assigned_rate_limits": { 00:13:29.623 "rw_ios_per_sec": 0, 00:13:29.623 "rw_mbytes_per_sec": 0, 00:13:29.623 "r_mbytes_per_sec": 0, 00:13:29.623 "w_mbytes_per_sec": 0 00:13:29.623 }, 00:13:29.623 "claimed": true, 00:13:29.623 "claim_type": "exclusive_write", 00:13:29.623 "zoned": false, 00:13:29.623 "supported_io_types": { 00:13:29.623 "read": true, 00:13:29.623 "write": true, 00:13:29.623 "unmap": true, 00:13:29.623 "flush": true, 00:13:29.623 "reset": true, 00:13:29.623 "nvme_admin": false, 00:13:29.623 "nvme_io": false, 00:13:29.623 "nvme_io_md": false, 00:13:29.623 "write_zeroes": true, 00:13:29.623 "zcopy": true, 00:13:29.623 "get_zone_info": false, 00:13:29.623 "zone_management": false, 00:13:29.623 "zone_append": false, 00:13:29.623 "compare": false, 00:13:29.623 "compare_and_write": false, 00:13:29.623 "abort": true, 00:13:29.623 "seek_hole": false, 00:13:29.623 "seek_data": false, 00:13:29.623 "copy": true, 00:13:29.623 "nvme_iov_md": false 00:13:29.623 }, 00:13:29.623 "memory_domains": [ 00:13:29.623 { 00:13:29.623 "dma_device_id": "system", 00:13:29.623 "dma_device_type": 1 00:13:29.623 }, 00:13:29.623 { 00:13:29.623 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:29.623 "dma_device_type": 2 00:13:29.623 } 00:13:29.623 ], 00:13:29.623 "driver_specific": {} 00:13:29.623 } 00:13:29.623 ] 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@911 -- # return 0 
00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:29.623 "name": "Existed_Raid", 00:13:29.623 "uuid": "97b0bc5c-997c-4d57-80c5-90891b27f910", 00:13:29.623 "strip_size_kb": 0, 00:13:29.623 "state": "online", 00:13:29.623 
"raid_level": "raid1", 00:13:29.623 "superblock": false, 00:13:29.623 "num_base_bdevs": 4, 00:13:29.623 "num_base_bdevs_discovered": 4, 00:13:29.623 "num_base_bdevs_operational": 4, 00:13:29.623 "base_bdevs_list": [ 00:13:29.623 { 00:13:29.623 "name": "NewBaseBdev", 00:13:29.623 "uuid": "ddcc0c25-17d5-498a-9bb1-2edb8e66f49d", 00:13:29.623 "is_configured": true, 00:13:29.623 "data_offset": 0, 00:13:29.623 "data_size": 65536 00:13:29.623 }, 00:13:29.623 { 00:13:29.623 "name": "BaseBdev2", 00:13:29.623 "uuid": "17cff1f1-5efb-4f09-a6c6-9311e28c8bbd", 00:13:29.623 "is_configured": true, 00:13:29.623 "data_offset": 0, 00:13:29.623 "data_size": 65536 00:13:29.623 }, 00:13:29.623 { 00:13:29.623 "name": "BaseBdev3", 00:13:29.623 "uuid": "2eb54851-bc9e-48d3-97e4-6fd2f3189970", 00:13:29.623 "is_configured": true, 00:13:29.623 "data_offset": 0, 00:13:29.623 "data_size": 65536 00:13:29.623 }, 00:13:29.623 { 00:13:29.623 "name": "BaseBdev4", 00:13:29.623 "uuid": "d88f4d03-749d-45e1-b493-2c7360bca706", 00:13:29.623 "is_configured": true, 00:13:29.623 "data_offset": 0, 00:13:29.623 "data_size": 65536 00:13:29.623 } 00:13:29.623 ] 00:13:29.623 }' 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:29.623 14:13:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@185 -- 
# local cmp_raid_bdev cmp_base_bdev 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:30.254 [2024-11-27 14:13:07.187792] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.254 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:30.254 "name": "Existed_Raid", 00:13:30.254 "aliases": [ 00:13:30.254 "97b0bc5c-997c-4d57-80c5-90891b27f910" 00:13:30.254 ], 00:13:30.254 "product_name": "Raid Volume", 00:13:30.254 "block_size": 512, 00:13:30.254 "num_blocks": 65536, 00:13:30.254 "uuid": "97b0bc5c-997c-4d57-80c5-90891b27f910", 00:13:30.254 "assigned_rate_limits": { 00:13:30.254 "rw_ios_per_sec": 0, 00:13:30.254 "rw_mbytes_per_sec": 0, 00:13:30.254 "r_mbytes_per_sec": 0, 00:13:30.254 "w_mbytes_per_sec": 0 00:13:30.254 }, 00:13:30.254 "claimed": false, 00:13:30.254 "zoned": false, 00:13:30.254 "supported_io_types": { 00:13:30.254 "read": true, 00:13:30.254 "write": true, 00:13:30.254 "unmap": false, 00:13:30.254 "flush": false, 00:13:30.254 "reset": true, 00:13:30.254 "nvme_admin": false, 00:13:30.254 "nvme_io": false, 00:13:30.254 "nvme_io_md": false, 00:13:30.254 "write_zeroes": true, 00:13:30.254 "zcopy": false, 00:13:30.254 "get_zone_info": false, 00:13:30.254 "zone_management": false, 00:13:30.254 "zone_append": false, 00:13:30.254 "compare": false, 00:13:30.254 "compare_and_write": false, 00:13:30.255 "abort": false, 00:13:30.255 "seek_hole": false, 00:13:30.255 "seek_data": false, 00:13:30.255 
"copy": false, 00:13:30.255 "nvme_iov_md": false 00:13:30.255 }, 00:13:30.255 "memory_domains": [ 00:13:30.255 { 00:13:30.255 "dma_device_id": "system", 00:13:30.255 "dma_device_type": 1 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.255 "dma_device_type": 2 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "system", 00:13:30.255 "dma_device_type": 1 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.255 "dma_device_type": 2 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "system", 00:13:30.255 "dma_device_type": 1 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.255 "dma_device_type": 2 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "system", 00:13:30.255 "dma_device_type": 1 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:30.255 "dma_device_type": 2 00:13:30.255 } 00:13:30.255 ], 00:13:30.255 "driver_specific": { 00:13:30.255 "raid": { 00:13:30.255 "uuid": "97b0bc5c-997c-4d57-80c5-90891b27f910", 00:13:30.255 "strip_size_kb": 0, 00:13:30.255 "state": "online", 00:13:30.255 "raid_level": "raid1", 00:13:30.255 "superblock": false, 00:13:30.255 "num_base_bdevs": 4, 00:13:30.255 "num_base_bdevs_discovered": 4, 00:13:30.255 "num_base_bdevs_operational": 4, 00:13:30.255 "base_bdevs_list": [ 00:13:30.255 { 00:13:30.255 "name": "NewBaseBdev", 00:13:30.255 "uuid": "ddcc0c25-17d5-498a-9bb1-2edb8e66f49d", 00:13:30.255 "is_configured": true, 00:13:30.255 "data_offset": 0, 00:13:30.255 "data_size": 65536 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "name": "BaseBdev2", 00:13:30.255 "uuid": "17cff1f1-5efb-4f09-a6c6-9311e28c8bbd", 00:13:30.255 "is_configured": true, 00:13:30.255 "data_offset": 0, 00:13:30.255 "data_size": 65536 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "name": "BaseBdev3", 00:13:30.255 "uuid": "2eb54851-bc9e-48d3-97e4-6fd2f3189970", 00:13:30.255 
"is_configured": true, 00:13:30.255 "data_offset": 0, 00:13:30.255 "data_size": 65536 00:13:30.255 }, 00:13:30.255 { 00:13:30.255 "name": "BaseBdev4", 00:13:30.255 "uuid": "d88f4d03-749d-45e1-b493-2c7360bca706", 00:13:30.255 "is_configured": true, 00:13:30.255 "data_offset": 0, 00:13:30.255 "data_size": 65536 00:13:30.255 } 00:13:30.255 ] 00:13:30.255 } 00:13:30.255 } 00:13:30.255 }' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:30.255 BaseBdev2 00:13:30.255 BaseBdev3 00:13:30.255 BaseBdev4' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.255 14:13:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:30.255 14:13:07 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:30.255 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:30.514 [2024-11-27 14:13:07.555457] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:30.514 [2024-11-27 14:13:07.555658] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:30.514 [2024-11-27 14:13:07.555819] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:30.514 [2024-11-27 14:13:07.556251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:30.514 [2024-11-27 14:13:07.556273] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@326 -- # killprocess 73226 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 73226 ']' 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # kill -0 73226 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # uname 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73226 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73226' 00:13:30.514 killing process with pid 73226 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@973 -- # kill 73226 00:13:30.514 [2024-11-27 14:13:07.596872] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:30.514 14:13:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@978 -- # wait 73226 00:13:30.773 [2024-11-27 14:13:07.947006] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:31.710 14:13:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:13:31.710 00:13:31.710 real 0m12.886s 00:13:31.710 user 0m21.455s 00:13:31.710 sys 0m1.756s 00:13:31.710 ************************************ 00:13:31.710 END TEST raid_state_function_test 00:13:31.710 ************************************ 00:13:31.710 14:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:31.710 14:13:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:13:31.970 14:13:09 bdev_raid -- bdev/bdev_raid.sh@969 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:13:31.970 14:13:09 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:31.970 14:13:09 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:31.970 14:13:09 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:31.970 ************************************ 00:13:31.970 START TEST raid_state_function_test_sb 00:13:31.970 ************************************ 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 4 true 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.970 
14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:13:31.970 Process raid pid: 73910 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=73910 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 73910' 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 73910 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 73910 ']' 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:31.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:31.970 14:13:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.970 [2024-11-27 14:13:09.146054] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:13:31.970 [2024-11-27 14:13:09.146541] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:32.230 [2024-11-27 14:13:09.332467] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:32.230 [2024-11-27 14:13:09.464120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:32.489 [2024-11-27 14:13:09.667148] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:32.489 [2024-11-27 14:13:09.667428] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.059 [2024-11-27 14:13:10.150850] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:33.059 [2024-11-27 14:13:10.150910] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:33.059 [2024-11-27 14:13:10.150928] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:33.059 [2024-11-27 14:13:10.150944] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:33.059 [2024-11-27 14:13:10.150954] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev 
with name: BaseBdev3 00:13:33.059 [2024-11-27 14:13:10.150968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:33.059 [2024-11-27 14:13:10.150983] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:33.059 [2024-11-27 14:13:10.150997] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.059 14:13:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.059 "name": "Existed_Raid", 00:13:33.059 "uuid": "18d0ba8f-27e2-485b-8c15-ed326363a5a2", 00:13:33.059 "strip_size_kb": 0, 00:13:33.059 "state": "configuring", 00:13:33.059 "raid_level": "raid1", 00:13:33.059 "superblock": true, 00:13:33.059 "num_base_bdevs": 4, 00:13:33.059 "num_base_bdevs_discovered": 0, 00:13:33.059 "num_base_bdevs_operational": 4, 00:13:33.059 "base_bdevs_list": [ 00:13:33.059 { 00:13:33.059 "name": "BaseBdev1", 00:13:33.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.059 "is_configured": false, 00:13:33.059 "data_offset": 0, 00:13:33.059 "data_size": 0 00:13:33.059 }, 00:13:33.059 { 00:13:33.059 "name": "BaseBdev2", 00:13:33.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.059 "is_configured": false, 00:13:33.059 "data_offset": 0, 00:13:33.059 "data_size": 0 00:13:33.059 }, 00:13:33.059 { 00:13:33.059 "name": "BaseBdev3", 00:13:33.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.059 "is_configured": false, 00:13:33.059 "data_offset": 0, 00:13:33.059 "data_size": 0 00:13:33.059 }, 00:13:33.059 { 00:13:33.059 "name": "BaseBdev4", 00:13:33.059 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.059 "is_configured": false, 00:13:33.059 "data_offset": 0, 00:13:33.059 "data_size": 0 00:13:33.059 } 00:13:33.059 ] 00:13:33.059 }' 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.059 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.627 14:13:10 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:33.627 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.627 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.627 [2024-11-27 14:13:10.666915] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:33.627 [2024-11-27 14:13:10.667090] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:13:33.627 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.627 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:33.627 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.627 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.627 [2024-11-27 14:13:10.674910] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:33.627 [2024-11-27 14:13:10.674960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:33.627 [2024-11-27 14:13:10.674976] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:33.627 [2024-11-27 14:13:10.674991] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:33.627 [2024-11-27 14:13:10.675000] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:33.627 [2024-11-27 14:13:10.675014] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:33.627 [2024-11-27 14:13:10.675023] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev4 00:13:33.627 [2024-11-27 14:13:10.675037] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:33.627 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.627 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:33.627 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.628 [2024-11-27 14:13:10.720951] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:33.628 BaseBdev1 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.628 [ 00:13:33.628 { 00:13:33.628 "name": "BaseBdev1", 00:13:33.628 "aliases": [ 00:13:33.628 "dee3de46-6d3a-4309-8253-378fa1bb0cde" 00:13:33.628 ], 00:13:33.628 "product_name": "Malloc disk", 00:13:33.628 "block_size": 512, 00:13:33.628 "num_blocks": 65536, 00:13:33.628 "uuid": "dee3de46-6d3a-4309-8253-378fa1bb0cde", 00:13:33.628 "assigned_rate_limits": { 00:13:33.628 "rw_ios_per_sec": 0, 00:13:33.628 "rw_mbytes_per_sec": 0, 00:13:33.628 "r_mbytes_per_sec": 0, 00:13:33.628 "w_mbytes_per_sec": 0 00:13:33.628 }, 00:13:33.628 "claimed": true, 00:13:33.628 "claim_type": "exclusive_write", 00:13:33.628 "zoned": false, 00:13:33.628 "supported_io_types": { 00:13:33.628 "read": true, 00:13:33.628 "write": true, 00:13:33.628 "unmap": true, 00:13:33.628 "flush": true, 00:13:33.628 "reset": true, 00:13:33.628 "nvme_admin": false, 00:13:33.628 "nvme_io": false, 00:13:33.628 "nvme_io_md": false, 00:13:33.628 "write_zeroes": true, 00:13:33.628 "zcopy": true, 00:13:33.628 "get_zone_info": false, 00:13:33.628 "zone_management": false, 00:13:33.628 "zone_append": false, 00:13:33.628 "compare": false, 00:13:33.628 "compare_and_write": false, 00:13:33.628 "abort": true, 00:13:33.628 "seek_hole": false, 00:13:33.628 "seek_data": false, 00:13:33.628 "copy": true, 00:13:33.628 "nvme_iov_md": false 00:13:33.628 }, 00:13:33.628 "memory_domains": [ 00:13:33.628 { 00:13:33.628 "dma_device_id": "system", 00:13:33.628 "dma_device_type": 1 00:13:33.628 }, 00:13:33.628 { 00:13:33.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:33.628 "dma_device_type": 2 00:13:33.628 } 00:13:33.628 
], 00:13:33.628 "driver_specific": {} 00:13:33.628 } 00:13:33.628 ] 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.628 14:13:10 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:33.628 "name": "Existed_Raid", 00:13:33.628 "uuid": "32f0fc93-93f2-4beb-8e42-1ee3fc76f624", 00:13:33.628 "strip_size_kb": 0, 00:13:33.628 "state": "configuring", 00:13:33.628 "raid_level": "raid1", 00:13:33.628 "superblock": true, 00:13:33.628 "num_base_bdevs": 4, 00:13:33.628 "num_base_bdevs_discovered": 1, 00:13:33.628 "num_base_bdevs_operational": 4, 00:13:33.628 "base_bdevs_list": [ 00:13:33.628 { 00:13:33.628 "name": "BaseBdev1", 00:13:33.628 "uuid": "dee3de46-6d3a-4309-8253-378fa1bb0cde", 00:13:33.628 "is_configured": true, 00:13:33.628 "data_offset": 2048, 00:13:33.628 "data_size": 63488 00:13:33.628 }, 00:13:33.628 { 00:13:33.628 "name": "BaseBdev2", 00:13:33.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.628 "is_configured": false, 00:13:33.628 "data_offset": 0, 00:13:33.628 "data_size": 0 00:13:33.628 }, 00:13:33.628 { 00:13:33.628 "name": "BaseBdev3", 00:13:33.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.628 "is_configured": false, 00:13:33.628 "data_offset": 0, 00:13:33.628 "data_size": 0 00:13:33.628 }, 00:13:33.628 { 00:13:33.628 "name": "BaseBdev4", 00:13:33.628 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:33.628 "is_configured": false, 00:13:33.628 "data_offset": 0, 00:13:33.628 "data_size": 0 00:13:33.628 } 00:13:33.628 ] 00:13:33.628 }' 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:33.628 14:13:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.197 14:13:11 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.197 [2024-11-27 14:13:11.269179] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:34.197 [2024-11-27 14:13:11.269256] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.197 [2024-11-27 14:13:11.277223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:34.197 [2024-11-27 14:13:11.279701] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:34.197 [2024-11-27 14:13:11.279914] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:34.197 [2024-11-27 14:13:11.279942] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:34.197 [2024-11-27 14:13:11.279970] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:34.197 [2024-11-27 14:13:11.279980] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:34.197 [2024-11-27 14:13:11.279993] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 
1 )) 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 
00:13:34.197 "name": "Existed_Raid", 00:13:34.197 "uuid": "09c77a60-239f-4bd3-a5e3-60d351dbc48c", 00:13:34.197 "strip_size_kb": 0, 00:13:34.197 "state": "configuring", 00:13:34.197 "raid_level": "raid1", 00:13:34.197 "superblock": true, 00:13:34.197 "num_base_bdevs": 4, 00:13:34.197 "num_base_bdevs_discovered": 1, 00:13:34.197 "num_base_bdevs_operational": 4, 00:13:34.197 "base_bdevs_list": [ 00:13:34.197 { 00:13:34.197 "name": "BaseBdev1", 00:13:34.197 "uuid": "dee3de46-6d3a-4309-8253-378fa1bb0cde", 00:13:34.197 "is_configured": true, 00:13:34.197 "data_offset": 2048, 00:13:34.197 "data_size": 63488 00:13:34.197 }, 00:13:34.197 { 00:13:34.197 "name": "BaseBdev2", 00:13:34.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.197 "is_configured": false, 00:13:34.197 "data_offset": 0, 00:13:34.197 "data_size": 0 00:13:34.197 }, 00:13:34.197 { 00:13:34.197 "name": "BaseBdev3", 00:13:34.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.197 "is_configured": false, 00:13:34.197 "data_offset": 0, 00:13:34.197 "data_size": 0 00:13:34.197 }, 00:13:34.197 { 00:13:34.197 "name": "BaseBdev4", 00:13:34.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.197 "is_configured": false, 00:13:34.197 "data_offset": 0, 00:13:34.197 "data_size": 0 00:13:34.197 } 00:13:34.197 ] 00:13:34.197 }' 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.197 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.765 [2024-11-27 14:13:11.835755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is 
claimed 00:13:34.765 BaseBdev2 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.765 [ 00:13:34.765 { 00:13:34.765 "name": "BaseBdev2", 00:13:34.765 "aliases": [ 00:13:34.765 "de235bcc-3d49-4389-893b-346299bbbf88" 00:13:34.765 ], 00:13:34.765 "product_name": "Malloc disk", 00:13:34.765 "block_size": 512, 00:13:34.765 "num_blocks": 65536, 00:13:34.765 "uuid": "de235bcc-3d49-4389-893b-346299bbbf88", 00:13:34.765 
"assigned_rate_limits": { 00:13:34.765 "rw_ios_per_sec": 0, 00:13:34.765 "rw_mbytes_per_sec": 0, 00:13:34.765 "r_mbytes_per_sec": 0, 00:13:34.765 "w_mbytes_per_sec": 0 00:13:34.765 }, 00:13:34.765 "claimed": true, 00:13:34.765 "claim_type": "exclusive_write", 00:13:34.765 "zoned": false, 00:13:34.765 "supported_io_types": { 00:13:34.765 "read": true, 00:13:34.765 "write": true, 00:13:34.765 "unmap": true, 00:13:34.765 "flush": true, 00:13:34.765 "reset": true, 00:13:34.765 "nvme_admin": false, 00:13:34.765 "nvme_io": false, 00:13:34.765 "nvme_io_md": false, 00:13:34.765 "write_zeroes": true, 00:13:34.765 "zcopy": true, 00:13:34.765 "get_zone_info": false, 00:13:34.765 "zone_management": false, 00:13:34.765 "zone_append": false, 00:13:34.765 "compare": false, 00:13:34.765 "compare_and_write": false, 00:13:34.765 "abort": true, 00:13:34.765 "seek_hole": false, 00:13:34.765 "seek_data": false, 00:13:34.765 "copy": true, 00:13:34.765 "nvme_iov_md": false 00:13:34.765 }, 00:13:34.765 "memory_domains": [ 00:13:34.765 { 00:13:34.765 "dma_device_id": "system", 00:13:34.765 "dma_device_type": 1 00:13:34.765 }, 00:13:34.765 { 00:13:34.765 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:34.765 "dma_device_type": 2 00:13:34.765 } 00:13:34.765 ], 00:13:34.765 "driver_specific": {} 00:13:34.765 } 00:13:34.765 ] 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=Existed_Raid 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.765 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:34.765 "name": "Existed_Raid", 00:13:34.765 "uuid": "09c77a60-239f-4bd3-a5e3-60d351dbc48c", 00:13:34.765 "strip_size_kb": 0, 00:13:34.765 "state": "configuring", 00:13:34.765 "raid_level": "raid1", 00:13:34.765 "superblock": true, 00:13:34.765 "num_base_bdevs": 4, 00:13:34.765 "num_base_bdevs_discovered": 2, 00:13:34.765 "num_base_bdevs_operational": 4, 
00:13:34.766 "base_bdevs_list": [ 00:13:34.766 { 00:13:34.766 "name": "BaseBdev1", 00:13:34.766 "uuid": "dee3de46-6d3a-4309-8253-378fa1bb0cde", 00:13:34.766 "is_configured": true, 00:13:34.766 "data_offset": 2048, 00:13:34.766 "data_size": 63488 00:13:34.766 }, 00:13:34.766 { 00:13:34.766 "name": "BaseBdev2", 00:13:34.766 "uuid": "de235bcc-3d49-4389-893b-346299bbbf88", 00:13:34.766 "is_configured": true, 00:13:34.766 "data_offset": 2048, 00:13:34.766 "data_size": 63488 00:13:34.766 }, 00:13:34.766 { 00:13:34.766 "name": "BaseBdev3", 00:13:34.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.766 "is_configured": false, 00:13:34.766 "data_offset": 0, 00:13:34.766 "data_size": 0 00:13:34.766 }, 00:13:34.766 { 00:13:34.766 "name": "BaseBdev4", 00:13:34.766 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:34.766 "is_configured": false, 00:13:34.766 "data_offset": 0, 00:13:34.766 "data_size": 0 00:13:34.766 } 00:13:34.766 ] 00:13:34.766 }' 00:13:34.766 14:13:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:34.766 14:13:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.334 [2024-11-27 14:13:12.443143] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:35.334 BaseBdev3 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # 
local bdev_name=BaseBdev3 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.334 [ 00:13:35.334 { 00:13:35.334 "name": "BaseBdev3", 00:13:35.334 "aliases": [ 00:13:35.334 "44a63960-cb44-41ea-945f-da08a930ecf0" 00:13:35.334 ], 00:13:35.334 "product_name": "Malloc disk", 00:13:35.334 "block_size": 512, 00:13:35.334 "num_blocks": 65536, 00:13:35.334 "uuid": "44a63960-cb44-41ea-945f-da08a930ecf0", 00:13:35.334 "assigned_rate_limits": { 00:13:35.334 "rw_ios_per_sec": 0, 00:13:35.334 "rw_mbytes_per_sec": 0, 00:13:35.334 "r_mbytes_per_sec": 0, 00:13:35.334 "w_mbytes_per_sec": 0 00:13:35.334 }, 00:13:35.334 "claimed": true, 00:13:35.334 "claim_type": "exclusive_write", 00:13:35.334 "zoned": false, 00:13:35.334 "supported_io_types": { 00:13:35.334 "read": true, 00:13:35.334 
"write": true, 00:13:35.334 "unmap": true, 00:13:35.334 "flush": true, 00:13:35.334 "reset": true, 00:13:35.334 "nvme_admin": false, 00:13:35.334 "nvme_io": false, 00:13:35.334 "nvme_io_md": false, 00:13:35.334 "write_zeroes": true, 00:13:35.334 "zcopy": true, 00:13:35.334 "get_zone_info": false, 00:13:35.334 "zone_management": false, 00:13:35.334 "zone_append": false, 00:13:35.334 "compare": false, 00:13:35.334 "compare_and_write": false, 00:13:35.334 "abort": true, 00:13:35.334 "seek_hole": false, 00:13:35.334 "seek_data": false, 00:13:35.334 "copy": true, 00:13:35.334 "nvme_iov_md": false 00:13:35.334 }, 00:13:35.334 "memory_domains": [ 00:13:35.334 { 00:13:35.334 "dma_device_id": "system", 00:13:35.334 "dma_device_type": 1 00:13:35.334 }, 00:13:35.334 { 00:13:35.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.334 "dma_device_type": 2 00:13:35.334 } 00:13:35.334 ], 00:13:35.334 "driver_specific": {} 00:13:35.334 } 00:13:35.334 ] 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local 
strip_size=0 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.334 "name": "Existed_Raid", 00:13:35.334 "uuid": "09c77a60-239f-4bd3-a5e3-60d351dbc48c", 00:13:35.334 "strip_size_kb": 0, 00:13:35.334 "state": "configuring", 00:13:35.334 "raid_level": "raid1", 00:13:35.334 "superblock": true, 00:13:35.334 "num_base_bdevs": 4, 00:13:35.334 "num_base_bdevs_discovered": 3, 00:13:35.334 "num_base_bdevs_operational": 4, 00:13:35.334 "base_bdevs_list": [ 00:13:35.334 { 00:13:35.334 "name": "BaseBdev1", 00:13:35.334 "uuid": "dee3de46-6d3a-4309-8253-378fa1bb0cde", 00:13:35.334 "is_configured": true, 00:13:35.334 "data_offset": 2048, 00:13:35.334 "data_size": 63488 00:13:35.334 }, 00:13:35.334 { 00:13:35.334 "name": "BaseBdev2", 00:13:35.334 "uuid": 
"de235bcc-3d49-4389-893b-346299bbbf88", 00:13:35.334 "is_configured": true, 00:13:35.334 "data_offset": 2048, 00:13:35.334 "data_size": 63488 00:13:35.334 }, 00:13:35.334 { 00:13:35.334 "name": "BaseBdev3", 00:13:35.334 "uuid": "44a63960-cb44-41ea-945f-da08a930ecf0", 00:13:35.334 "is_configured": true, 00:13:35.334 "data_offset": 2048, 00:13:35.334 "data_size": 63488 00:13:35.334 }, 00:13:35.334 { 00:13:35.334 "name": "BaseBdev4", 00:13:35.334 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:35.334 "is_configured": false, 00:13:35.334 "data_offset": 0, 00:13:35.334 "data_size": 0 00:13:35.334 } 00:13:35.334 ] 00:13:35.334 }' 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.334 14:13:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.903 [2024-11-27 14:13:13.049121] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:35.903 [2024-11-27 14:13:13.049468] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:35.903 [2024-11-27 14:13:13.049487] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:35.903 BaseBdev4 00:13:35.903 [2024-11-27 14:13:13.049836] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:35.903 [2024-11-27 14:13:13.050058] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:35.903 [2024-11-27 14:13:13.050078] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, 
raid_bdev 0x617000007e80 00:13:35.903 [2024-11-27 14:13:13.050272] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.903 [ 00:13:35.903 { 00:13:35.903 "name": "BaseBdev4", 00:13:35.903 "aliases": [ 00:13:35.903 "b5dc7cb2-0483-401e-b4ee-fb2fef88af66" 00:13:35.903 ], 00:13:35.903 "product_name": "Malloc disk", 00:13:35.903 "block_size": 512, 00:13:35.903 
"num_blocks": 65536, 00:13:35.903 "uuid": "b5dc7cb2-0483-401e-b4ee-fb2fef88af66", 00:13:35.903 "assigned_rate_limits": { 00:13:35.903 "rw_ios_per_sec": 0, 00:13:35.903 "rw_mbytes_per_sec": 0, 00:13:35.903 "r_mbytes_per_sec": 0, 00:13:35.903 "w_mbytes_per_sec": 0 00:13:35.903 }, 00:13:35.903 "claimed": true, 00:13:35.903 "claim_type": "exclusive_write", 00:13:35.903 "zoned": false, 00:13:35.903 "supported_io_types": { 00:13:35.903 "read": true, 00:13:35.903 "write": true, 00:13:35.903 "unmap": true, 00:13:35.903 "flush": true, 00:13:35.903 "reset": true, 00:13:35.903 "nvme_admin": false, 00:13:35.903 "nvme_io": false, 00:13:35.903 "nvme_io_md": false, 00:13:35.903 "write_zeroes": true, 00:13:35.903 "zcopy": true, 00:13:35.903 "get_zone_info": false, 00:13:35.903 "zone_management": false, 00:13:35.903 "zone_append": false, 00:13:35.903 "compare": false, 00:13:35.903 "compare_and_write": false, 00:13:35.903 "abort": true, 00:13:35.903 "seek_hole": false, 00:13:35.903 "seek_data": false, 00:13:35.903 "copy": true, 00:13:35.903 "nvme_iov_md": false 00:13:35.903 }, 00:13:35.903 "memory_domains": [ 00:13:35.903 { 00:13:35.903 "dma_device_id": "system", 00:13:35.903 "dma_device_type": 1 00:13:35.903 }, 00:13:35.903 { 00:13:35.903 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:35.903 "dma_device_type": 2 00:13:35.903 } 00:13:35.903 ], 00:13:35.903 "driver_specific": {} 00:13:35.903 } 00:13:35.903 ] 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:35.903 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 
00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:35.904 "name": "Existed_Raid", 00:13:35.904 "uuid": "09c77a60-239f-4bd3-a5e3-60d351dbc48c", 00:13:35.904 "strip_size_kb": 0, 00:13:35.904 "state": "online", 00:13:35.904 "raid_level": "raid1", 00:13:35.904 "superblock": true, 00:13:35.904 "num_base_bdevs": 4, 
00:13:35.904 "num_base_bdevs_discovered": 4, 00:13:35.904 "num_base_bdevs_operational": 4, 00:13:35.904 "base_bdevs_list": [ 00:13:35.904 { 00:13:35.904 "name": "BaseBdev1", 00:13:35.904 "uuid": "dee3de46-6d3a-4309-8253-378fa1bb0cde", 00:13:35.904 "is_configured": true, 00:13:35.904 "data_offset": 2048, 00:13:35.904 "data_size": 63488 00:13:35.904 }, 00:13:35.904 { 00:13:35.904 "name": "BaseBdev2", 00:13:35.904 "uuid": "de235bcc-3d49-4389-893b-346299bbbf88", 00:13:35.904 "is_configured": true, 00:13:35.904 "data_offset": 2048, 00:13:35.904 "data_size": 63488 00:13:35.904 }, 00:13:35.904 { 00:13:35.904 "name": "BaseBdev3", 00:13:35.904 "uuid": "44a63960-cb44-41ea-945f-da08a930ecf0", 00:13:35.904 "is_configured": true, 00:13:35.904 "data_offset": 2048, 00:13:35.904 "data_size": 63488 00:13:35.904 }, 00:13:35.904 { 00:13:35.904 "name": "BaseBdev4", 00:13:35.904 "uuid": "b5dc7cb2-0483-401e-b4ee-fb2fef88af66", 00:13:35.904 "is_configured": true, 00:13:35.904 "data_offset": 2048, 00:13:35.904 "data_size": 63488 00:13:35.904 } 00:13:35.904 ] 00:13:35.904 }' 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:35.904 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.472 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:13:36.472 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:36.472 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:36.472 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:36.472 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:36.472 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:36.472 
14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:36.472 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.472 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.472 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:36.472 [2024-11-27 14:13:13.617795] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:36.472 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.472 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:36.472 "name": "Existed_Raid", 00:13:36.472 "aliases": [ 00:13:36.472 "09c77a60-239f-4bd3-a5e3-60d351dbc48c" 00:13:36.472 ], 00:13:36.472 "product_name": "Raid Volume", 00:13:36.472 "block_size": 512, 00:13:36.472 "num_blocks": 63488, 00:13:36.472 "uuid": "09c77a60-239f-4bd3-a5e3-60d351dbc48c", 00:13:36.472 "assigned_rate_limits": { 00:13:36.472 "rw_ios_per_sec": 0, 00:13:36.472 "rw_mbytes_per_sec": 0, 00:13:36.472 "r_mbytes_per_sec": 0, 00:13:36.472 "w_mbytes_per_sec": 0 00:13:36.472 }, 00:13:36.472 "claimed": false, 00:13:36.472 "zoned": false, 00:13:36.472 "supported_io_types": { 00:13:36.472 "read": true, 00:13:36.472 "write": true, 00:13:36.472 "unmap": false, 00:13:36.472 "flush": false, 00:13:36.472 "reset": true, 00:13:36.472 "nvme_admin": false, 00:13:36.472 "nvme_io": false, 00:13:36.472 "nvme_io_md": false, 00:13:36.472 "write_zeroes": true, 00:13:36.472 "zcopy": false, 00:13:36.472 "get_zone_info": false, 00:13:36.472 "zone_management": false, 00:13:36.472 "zone_append": false, 00:13:36.472 "compare": false, 00:13:36.472 "compare_and_write": false, 00:13:36.472 "abort": false, 00:13:36.472 "seek_hole": false, 00:13:36.472 "seek_data": false, 00:13:36.472 "copy": false, 00:13:36.472 
"nvme_iov_md": false 00:13:36.472 }, 00:13:36.472 "memory_domains": [ 00:13:36.472 { 00:13:36.472 "dma_device_id": "system", 00:13:36.472 "dma_device_type": 1 00:13:36.472 }, 00:13:36.472 { 00:13:36.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.472 "dma_device_type": 2 00:13:36.472 }, 00:13:36.472 { 00:13:36.472 "dma_device_id": "system", 00:13:36.472 "dma_device_type": 1 00:13:36.472 }, 00:13:36.472 { 00:13:36.472 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.472 "dma_device_type": 2 00:13:36.472 }, 00:13:36.472 { 00:13:36.472 "dma_device_id": "system", 00:13:36.472 "dma_device_type": 1 00:13:36.472 }, 00:13:36.472 { 00:13:36.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.473 "dma_device_type": 2 00:13:36.473 }, 00:13:36.473 { 00:13:36.473 "dma_device_id": "system", 00:13:36.473 "dma_device_type": 1 00:13:36.473 }, 00:13:36.473 { 00:13:36.473 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:36.473 "dma_device_type": 2 00:13:36.473 } 00:13:36.473 ], 00:13:36.473 "driver_specific": { 00:13:36.473 "raid": { 00:13:36.473 "uuid": "09c77a60-239f-4bd3-a5e3-60d351dbc48c", 00:13:36.473 "strip_size_kb": 0, 00:13:36.473 "state": "online", 00:13:36.473 "raid_level": "raid1", 00:13:36.473 "superblock": true, 00:13:36.473 "num_base_bdevs": 4, 00:13:36.473 "num_base_bdevs_discovered": 4, 00:13:36.473 "num_base_bdevs_operational": 4, 00:13:36.473 "base_bdevs_list": [ 00:13:36.473 { 00:13:36.473 "name": "BaseBdev1", 00:13:36.473 "uuid": "dee3de46-6d3a-4309-8253-378fa1bb0cde", 00:13:36.473 "is_configured": true, 00:13:36.473 "data_offset": 2048, 00:13:36.473 "data_size": 63488 00:13:36.473 }, 00:13:36.473 { 00:13:36.473 "name": "BaseBdev2", 00:13:36.473 "uuid": "de235bcc-3d49-4389-893b-346299bbbf88", 00:13:36.473 "is_configured": true, 00:13:36.473 "data_offset": 2048, 00:13:36.473 "data_size": 63488 00:13:36.473 }, 00:13:36.473 { 00:13:36.473 "name": "BaseBdev3", 00:13:36.473 "uuid": "44a63960-cb44-41ea-945f-da08a930ecf0", 00:13:36.473 "is_configured": true, 
00:13:36.473 "data_offset": 2048, 00:13:36.473 "data_size": 63488 00:13:36.473 }, 00:13:36.473 { 00:13:36.473 "name": "BaseBdev4", 00:13:36.473 "uuid": "b5dc7cb2-0483-401e-b4ee-fb2fef88af66", 00:13:36.473 "is_configured": true, 00:13:36.473 "data_offset": 2048, 00:13:36.473 "data_size": 63488 00:13:36.473 } 00:13:36.473 ] 00:13:36.473 } 00:13:36.473 } 00:13:36.473 }' 00:13:36.473 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:36.473 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:13:36.473 BaseBdev2 00:13:36.473 BaseBdev3 00:13:36.473 BaseBdev4' 00:13:36.473 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.732 14:13:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # 
for name in $base_bdev_names 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.732 14:13:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.732 [2024-11-27 14:13:13.985640] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:36.991 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:13:36.992 14:13:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:36.992 "name": "Existed_Raid", 00:13:36.992 "uuid": "09c77a60-239f-4bd3-a5e3-60d351dbc48c", 00:13:36.992 "strip_size_kb": 0, 00:13:36.992 
"state": "online", 00:13:36.992 "raid_level": "raid1", 00:13:36.992 "superblock": true, 00:13:36.992 "num_base_bdevs": 4, 00:13:36.992 "num_base_bdevs_discovered": 3, 00:13:36.992 "num_base_bdevs_operational": 3, 00:13:36.992 "base_bdevs_list": [ 00:13:36.992 { 00:13:36.992 "name": null, 00:13:36.992 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:36.992 "is_configured": false, 00:13:36.992 "data_offset": 0, 00:13:36.992 "data_size": 63488 00:13:36.992 }, 00:13:36.992 { 00:13:36.992 "name": "BaseBdev2", 00:13:36.992 "uuid": "de235bcc-3d49-4389-893b-346299bbbf88", 00:13:36.992 "is_configured": true, 00:13:36.992 "data_offset": 2048, 00:13:36.992 "data_size": 63488 00:13:36.992 }, 00:13:36.992 { 00:13:36.992 "name": "BaseBdev3", 00:13:36.992 "uuid": "44a63960-cb44-41ea-945f-da08a930ecf0", 00:13:36.992 "is_configured": true, 00:13:36.992 "data_offset": 2048, 00:13:36.992 "data_size": 63488 00:13:36.992 }, 00:13:36.992 { 00:13:36.992 "name": "BaseBdev4", 00:13:36.992 "uuid": "b5dc7cb2-0483-401e-b4ee-fb2fef88af66", 00:13:36.992 "is_configured": true, 00:13:36.992 "data_offset": 2048, 00:13:36.992 "data_size": 63488 00:13:36.992 } 00:13:36.992 ] 00:13:36.992 }' 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:36.992 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.560 14:13:14 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.560 [2024-11-27 14:13:14.658246] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.560 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.561 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.561 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.561 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.561 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.561 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.561 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.561 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.561 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:13:37.561 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:13:37.561 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.561 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.561 [2024-11-27 14:13:14.803458] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.820 14:13:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.820 [2024-11-27 14:13:14.946863] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:37.820 [2024-11-27 14:13:14.947132] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:37.820 [2024-11-27 14:13:15.033892] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:37.820 [2024-11-27 14:13:15.033956] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:37.820 [2024-11-27 14:13:15.033976] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:37.820 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.080 BaseBdev2 00:13:38.080 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.080 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:13:38.080 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:13:38.080 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.080 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:38.080 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.080 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.080 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.080 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.080 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.080 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:13:38.081 [ 00:13:38.081 { 00:13:38.081 "name": "BaseBdev2", 00:13:38.081 "aliases": [ 00:13:38.081 "8019adef-be0a-488d-a05c-9c770b8fd13f" 00:13:38.081 ], 00:13:38.081 "product_name": "Malloc disk", 00:13:38.081 "block_size": 512, 00:13:38.081 "num_blocks": 65536, 00:13:38.081 "uuid": "8019adef-be0a-488d-a05c-9c770b8fd13f", 00:13:38.081 "assigned_rate_limits": { 00:13:38.081 "rw_ios_per_sec": 0, 00:13:38.081 "rw_mbytes_per_sec": 0, 00:13:38.081 "r_mbytes_per_sec": 0, 00:13:38.081 "w_mbytes_per_sec": 0 00:13:38.081 }, 00:13:38.081 "claimed": false, 00:13:38.081 "zoned": false, 00:13:38.081 "supported_io_types": { 00:13:38.081 "read": true, 00:13:38.081 "write": true, 00:13:38.081 "unmap": true, 00:13:38.081 "flush": true, 00:13:38.081 "reset": true, 00:13:38.081 "nvme_admin": false, 00:13:38.081 "nvme_io": false, 00:13:38.081 "nvme_io_md": false, 00:13:38.081 "write_zeroes": true, 00:13:38.081 "zcopy": true, 00:13:38.081 "get_zone_info": false, 00:13:38.081 "zone_management": false, 00:13:38.081 "zone_append": false, 00:13:38.081 "compare": false, 00:13:38.081 "compare_and_write": false, 00:13:38.081 "abort": true, 00:13:38.081 "seek_hole": false, 00:13:38.081 "seek_data": false, 00:13:38.081 "copy": true, 00:13:38.081 "nvme_iov_md": false 00:13:38.081 }, 00:13:38.081 "memory_domains": [ 00:13:38.081 { 00:13:38.081 "dma_device_id": "system", 00:13:38.081 "dma_device_type": 1 00:13:38.081 }, 00:13:38.081 { 00:13:38.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.081 "dma_device_type": 2 00:13:38.081 } 00:13:38.081 ], 00:13:38.081 "driver_specific": {} 00:13:38.081 } 00:13:38.081 ] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.081 14:13:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 BaseBdev3 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.081 14:13:15 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 [ 00:13:38.081 { 00:13:38.081 "name": "BaseBdev3", 00:13:38.081 "aliases": [ 00:13:38.081 "a7e85f48-1702-488c-846a-0500c8866abb" 00:13:38.081 ], 00:13:38.081 "product_name": "Malloc disk", 00:13:38.081 "block_size": 512, 00:13:38.081 "num_blocks": 65536, 00:13:38.081 "uuid": "a7e85f48-1702-488c-846a-0500c8866abb", 00:13:38.081 "assigned_rate_limits": { 00:13:38.081 "rw_ios_per_sec": 0, 00:13:38.081 "rw_mbytes_per_sec": 0, 00:13:38.081 "r_mbytes_per_sec": 0, 00:13:38.081 "w_mbytes_per_sec": 0 00:13:38.081 }, 00:13:38.081 "claimed": false, 00:13:38.081 "zoned": false, 00:13:38.081 "supported_io_types": { 00:13:38.081 "read": true, 00:13:38.081 "write": true, 00:13:38.081 "unmap": true, 00:13:38.081 "flush": true, 00:13:38.081 "reset": true, 00:13:38.081 "nvme_admin": false, 00:13:38.081 "nvme_io": false, 00:13:38.081 "nvme_io_md": false, 00:13:38.081 "write_zeroes": true, 00:13:38.081 "zcopy": true, 00:13:38.081 "get_zone_info": false, 00:13:38.081 "zone_management": false, 00:13:38.081 "zone_append": false, 00:13:38.081 "compare": false, 00:13:38.081 "compare_and_write": false, 00:13:38.081 "abort": true, 00:13:38.081 "seek_hole": false, 00:13:38.081 "seek_data": false, 00:13:38.081 "copy": true, 00:13:38.081 "nvme_iov_md": false 00:13:38.081 }, 00:13:38.081 "memory_domains": [ 00:13:38.081 { 00:13:38.081 "dma_device_id": "system", 00:13:38.081 "dma_device_type": 1 00:13:38.081 }, 00:13:38.081 { 00:13:38.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.081 "dma_device_type": 2 00:13:38.081 } 00:13:38.081 ], 00:13:38.081 "driver_specific": {} 00:13:38.081 } 00:13:38.081 ] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 BaseBdev4 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.081 [ 00:13:38.081 { 00:13:38.081 "name": "BaseBdev4", 00:13:38.081 "aliases": [ 00:13:38.081 "0bfdc89f-e1ee-46f9-bf34-511c73a37a79" 00:13:38.081 ], 00:13:38.081 "product_name": "Malloc disk", 00:13:38.081 "block_size": 512, 00:13:38.081 "num_blocks": 65536, 00:13:38.081 "uuid": "0bfdc89f-e1ee-46f9-bf34-511c73a37a79", 00:13:38.081 "assigned_rate_limits": { 00:13:38.081 "rw_ios_per_sec": 0, 00:13:38.081 "rw_mbytes_per_sec": 0, 00:13:38.081 "r_mbytes_per_sec": 0, 00:13:38.081 "w_mbytes_per_sec": 0 00:13:38.081 }, 00:13:38.081 "claimed": false, 00:13:38.081 "zoned": false, 00:13:38.081 "supported_io_types": { 00:13:38.081 "read": true, 00:13:38.081 "write": true, 00:13:38.081 "unmap": true, 00:13:38.081 "flush": true, 00:13:38.081 "reset": true, 00:13:38.081 "nvme_admin": false, 00:13:38.081 "nvme_io": false, 00:13:38.081 "nvme_io_md": false, 00:13:38.081 "write_zeroes": true, 00:13:38.081 "zcopy": true, 00:13:38.081 "get_zone_info": false, 00:13:38.081 "zone_management": false, 00:13:38.081 "zone_append": false, 00:13:38.081 "compare": false, 00:13:38.081 "compare_and_write": false, 00:13:38.081 "abort": true, 00:13:38.081 "seek_hole": false, 00:13:38.081 "seek_data": false, 00:13:38.081 "copy": true, 00:13:38.081 "nvme_iov_md": false 00:13:38.081 }, 00:13:38.081 "memory_domains": [ 00:13:38.081 { 00:13:38.081 "dma_device_id": "system", 00:13:38.081 "dma_device_type": 1 00:13:38.081 }, 00:13:38.081 { 00:13:38.081 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.081 "dma_device_type": 2 00:13:38.081 } 00:13:38.081 ], 00:13:38.081 "driver_specific": {} 00:13:38.081 } 00:13:38.081 ] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 
00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:13:38.081 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.082 [2024-11-27 14:13:15.331777] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:38.082 [2024-11-27 14:13:15.332019] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:38.082 [2024-11-27 14:13:15.332152] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:38.082 [2024-11-27 14:13:15.334577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:38.082 [2024-11-27 14:13:15.334787] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.082 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.341 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.341 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.341 "name": "Existed_Raid", 00:13:38.341 "uuid": "e29ab660-6483-4216-b611-e0e4fa3a808e", 00:13:38.341 "strip_size_kb": 0, 00:13:38.341 "state": "configuring", 00:13:38.341 "raid_level": "raid1", 00:13:38.341 "superblock": true, 00:13:38.341 "num_base_bdevs": 4, 00:13:38.341 "num_base_bdevs_discovered": 3, 00:13:38.341 "num_base_bdevs_operational": 4, 00:13:38.341 "base_bdevs_list": [ 00:13:38.341 { 00:13:38.341 "name": "BaseBdev1", 00:13:38.341 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.341 "is_configured": false, 00:13:38.341 "data_offset": 0, 00:13:38.341 "data_size": 0 00:13:38.341 }, 00:13:38.341 { 00:13:38.341 "name": "BaseBdev2", 00:13:38.341 "uuid": "8019adef-be0a-488d-a05c-9c770b8fd13f", 
00:13:38.341 "is_configured": true, 00:13:38.341 "data_offset": 2048, 00:13:38.341 "data_size": 63488 00:13:38.341 }, 00:13:38.341 { 00:13:38.341 "name": "BaseBdev3", 00:13:38.341 "uuid": "a7e85f48-1702-488c-846a-0500c8866abb", 00:13:38.341 "is_configured": true, 00:13:38.341 "data_offset": 2048, 00:13:38.341 "data_size": 63488 00:13:38.341 }, 00:13:38.341 { 00:13:38.341 "name": "BaseBdev4", 00:13:38.341 "uuid": "0bfdc89f-e1ee-46f9-bf34-511c73a37a79", 00:13:38.341 "is_configured": true, 00:13:38.341 "data_offset": 2048, 00:13:38.341 "data_size": 63488 00:13:38.341 } 00:13:38.341 ] 00:13:38.341 }' 00:13:38.341 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.341 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.916 [2024-11-27 14:13:15.888041] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:38.916 "name": "Existed_Raid", 00:13:38.916 "uuid": "e29ab660-6483-4216-b611-e0e4fa3a808e", 00:13:38.916 "strip_size_kb": 0, 00:13:38.916 "state": "configuring", 00:13:38.916 "raid_level": "raid1", 00:13:38.916 "superblock": true, 00:13:38.916 "num_base_bdevs": 4, 00:13:38.916 "num_base_bdevs_discovered": 2, 00:13:38.916 "num_base_bdevs_operational": 4, 00:13:38.916 "base_bdevs_list": [ 00:13:38.916 { 00:13:38.916 "name": "BaseBdev1", 00:13:38.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:38.916 "is_configured": false, 00:13:38.916 "data_offset": 0, 00:13:38.916 "data_size": 0 00:13:38.916 }, 00:13:38.916 { 00:13:38.916 "name": null, 00:13:38.916 "uuid": "8019adef-be0a-488d-a05c-9c770b8fd13f", 00:13:38.916 
"is_configured": false, 00:13:38.916 "data_offset": 0, 00:13:38.916 "data_size": 63488 00:13:38.916 }, 00:13:38.916 { 00:13:38.916 "name": "BaseBdev3", 00:13:38.916 "uuid": "a7e85f48-1702-488c-846a-0500c8866abb", 00:13:38.916 "is_configured": true, 00:13:38.916 "data_offset": 2048, 00:13:38.916 "data_size": 63488 00:13:38.916 }, 00:13:38.916 { 00:13:38.916 "name": "BaseBdev4", 00:13:38.916 "uuid": "0bfdc89f-e1ee-46f9-bf34-511c73a37a79", 00:13:38.916 "is_configured": true, 00:13:38.916 "data_offset": 2048, 00:13:38.916 "data_size": 63488 00:13:38.916 } 00:13:38.916 ] 00:13:38.916 }' 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:38.916 14:13:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.174 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.174 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.174 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:39.174 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.174 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.487 [2024-11-27 14:13:16.506215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:39.487 BaseBdev1 
00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.487 [ 00:13:39.487 { 00:13:39.487 "name": "BaseBdev1", 00:13:39.487 "aliases": [ 00:13:39.487 "6930d635-4879-481d-a3f8-27ce7c7998b3" 00:13:39.487 ], 00:13:39.487 "product_name": "Malloc disk", 00:13:39.487 "block_size": 512, 00:13:39.487 "num_blocks": 65536, 00:13:39.487 "uuid": "6930d635-4879-481d-a3f8-27ce7c7998b3", 00:13:39.487 "assigned_rate_limits": { 00:13:39.487 
"rw_ios_per_sec": 0, 00:13:39.487 "rw_mbytes_per_sec": 0, 00:13:39.487 "r_mbytes_per_sec": 0, 00:13:39.487 "w_mbytes_per_sec": 0 00:13:39.487 }, 00:13:39.487 "claimed": true, 00:13:39.487 "claim_type": "exclusive_write", 00:13:39.487 "zoned": false, 00:13:39.487 "supported_io_types": { 00:13:39.487 "read": true, 00:13:39.487 "write": true, 00:13:39.487 "unmap": true, 00:13:39.487 "flush": true, 00:13:39.487 "reset": true, 00:13:39.487 "nvme_admin": false, 00:13:39.487 "nvme_io": false, 00:13:39.487 "nvme_io_md": false, 00:13:39.487 "write_zeroes": true, 00:13:39.487 "zcopy": true, 00:13:39.487 "get_zone_info": false, 00:13:39.487 "zone_management": false, 00:13:39.487 "zone_append": false, 00:13:39.487 "compare": false, 00:13:39.487 "compare_and_write": false, 00:13:39.487 "abort": true, 00:13:39.487 "seek_hole": false, 00:13:39.487 "seek_data": false, 00:13:39.487 "copy": true, 00:13:39.487 "nvme_iov_md": false 00:13:39.487 }, 00:13:39.487 "memory_domains": [ 00:13:39.487 { 00:13:39.487 "dma_device_id": "system", 00:13:39.487 "dma_device_type": 1 00:13:39.487 }, 00:13:39.487 { 00:13:39.487 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.487 "dma_device_type": 2 00:13:39.487 } 00:13:39.487 ], 00:13:39.487 "driver_specific": {} 00:13:39.487 } 00:13:39.487 ] 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:39.487 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:39.487 "name": "Existed_Raid", 00:13:39.487 "uuid": "e29ab660-6483-4216-b611-e0e4fa3a808e", 00:13:39.487 "strip_size_kb": 0, 00:13:39.487 "state": "configuring", 00:13:39.487 "raid_level": "raid1", 00:13:39.487 "superblock": true, 00:13:39.487 "num_base_bdevs": 4, 00:13:39.487 "num_base_bdevs_discovered": 3, 00:13:39.487 "num_base_bdevs_operational": 4, 00:13:39.487 "base_bdevs_list": [ 00:13:39.487 { 00:13:39.487 "name": "BaseBdev1", 00:13:39.487 "uuid": "6930d635-4879-481d-a3f8-27ce7c7998b3", 00:13:39.487 "is_configured": true, 00:13:39.487 "data_offset": 2048, 00:13:39.487 "data_size": 63488 
00:13:39.487 }, 00:13:39.487 { 00:13:39.487 "name": null, 00:13:39.487 "uuid": "8019adef-be0a-488d-a05c-9c770b8fd13f", 00:13:39.487 "is_configured": false, 00:13:39.487 "data_offset": 0, 00:13:39.487 "data_size": 63488 00:13:39.487 }, 00:13:39.487 { 00:13:39.487 "name": "BaseBdev3", 00:13:39.487 "uuid": "a7e85f48-1702-488c-846a-0500c8866abb", 00:13:39.487 "is_configured": true, 00:13:39.488 "data_offset": 2048, 00:13:39.488 "data_size": 63488 00:13:39.488 }, 00:13:39.488 { 00:13:39.488 "name": "BaseBdev4", 00:13:39.488 "uuid": "0bfdc89f-e1ee-46f9-bf34-511c73a37a79", 00:13:39.488 "is_configured": true, 00:13:39.488 "data_offset": 2048, 00:13:39.488 "data_size": 63488 00:13:39.488 } 00:13:39.488 ] 00:13:39.488 }' 00:13:39.488 14:13:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:39.488 14:13:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.055 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.055 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.055 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:40.055 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.055 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.055 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:13:40.055 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:13:40.055 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.055 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.055 
[2024-11-27 14:13:17.086441] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:40.055 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.056 14:13:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.056 "name": "Existed_Raid", 00:13:40.056 "uuid": "e29ab660-6483-4216-b611-e0e4fa3a808e", 00:13:40.056 "strip_size_kb": 0, 00:13:40.056 "state": "configuring", 00:13:40.056 "raid_level": "raid1", 00:13:40.056 "superblock": true, 00:13:40.056 "num_base_bdevs": 4, 00:13:40.056 "num_base_bdevs_discovered": 2, 00:13:40.056 "num_base_bdevs_operational": 4, 00:13:40.056 "base_bdevs_list": [ 00:13:40.056 { 00:13:40.056 "name": "BaseBdev1", 00:13:40.056 "uuid": "6930d635-4879-481d-a3f8-27ce7c7998b3", 00:13:40.056 "is_configured": true, 00:13:40.056 "data_offset": 2048, 00:13:40.056 "data_size": 63488 00:13:40.056 }, 00:13:40.056 { 00:13:40.056 "name": null, 00:13:40.056 "uuid": "8019adef-be0a-488d-a05c-9c770b8fd13f", 00:13:40.056 "is_configured": false, 00:13:40.056 "data_offset": 0, 00:13:40.056 "data_size": 63488 00:13:40.056 }, 00:13:40.056 { 00:13:40.056 "name": null, 00:13:40.056 "uuid": "a7e85f48-1702-488c-846a-0500c8866abb", 00:13:40.056 "is_configured": false, 00:13:40.056 "data_offset": 0, 00:13:40.056 "data_size": 63488 00:13:40.056 }, 00:13:40.056 { 00:13:40.056 "name": "BaseBdev4", 00:13:40.056 "uuid": "0bfdc89f-e1ee-46f9-bf34-511c73a37a79", 00:13:40.056 "is_configured": true, 00:13:40.056 "data_offset": 2048, 00:13:40.056 "data_size": 63488 00:13:40.056 } 00:13:40.056 ] 00:13:40.056 }' 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.056 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.624 
14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.624 [2024-11-27 14:13:17.658564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:40.624 "name": "Existed_Raid", 00:13:40.624 "uuid": "e29ab660-6483-4216-b611-e0e4fa3a808e", 00:13:40.624 "strip_size_kb": 0, 00:13:40.624 "state": "configuring", 00:13:40.624 "raid_level": "raid1", 00:13:40.624 "superblock": true, 00:13:40.624 "num_base_bdevs": 4, 00:13:40.624 "num_base_bdevs_discovered": 3, 00:13:40.624 "num_base_bdevs_operational": 4, 00:13:40.624 "base_bdevs_list": [ 00:13:40.624 { 00:13:40.624 "name": "BaseBdev1", 00:13:40.624 "uuid": "6930d635-4879-481d-a3f8-27ce7c7998b3", 00:13:40.624 "is_configured": true, 00:13:40.624 "data_offset": 2048, 00:13:40.624 "data_size": 63488 00:13:40.624 }, 00:13:40.624 { 00:13:40.624 "name": null, 00:13:40.624 "uuid": "8019adef-be0a-488d-a05c-9c770b8fd13f", 00:13:40.624 "is_configured": false, 00:13:40.624 "data_offset": 0, 00:13:40.624 "data_size": 63488 00:13:40.624 }, 00:13:40.624 { 00:13:40.624 "name": "BaseBdev3", 00:13:40.624 "uuid": "a7e85f48-1702-488c-846a-0500c8866abb", 00:13:40.624 "is_configured": true, 00:13:40.624 "data_offset": 2048, 00:13:40.624 "data_size": 63488 00:13:40.624 }, 00:13:40.624 { 00:13:40.624 "name": "BaseBdev4", 00:13:40.624 "uuid": 
"0bfdc89f-e1ee-46f9-bf34-511c73a37a79", 00:13:40.624 "is_configured": true, 00:13:40.624 "data_offset": 2048, 00:13:40.624 "data_size": 63488 00:13:40.624 } 00:13:40.624 ] 00:13:40.624 }' 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:40.624 14:13:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:40.883 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:40.883 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:40.883 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:40.883 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.141 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.142 [2024-11-27 14:13:18.186863] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.142 "name": "Existed_Raid", 00:13:41.142 "uuid": "e29ab660-6483-4216-b611-e0e4fa3a808e", 00:13:41.142 "strip_size_kb": 0, 00:13:41.142 "state": "configuring", 00:13:41.142 "raid_level": "raid1", 00:13:41.142 "superblock": true, 00:13:41.142 "num_base_bdevs": 4, 00:13:41.142 "num_base_bdevs_discovered": 2, 00:13:41.142 "num_base_bdevs_operational": 4, 00:13:41.142 "base_bdevs_list": [ 00:13:41.142 { 00:13:41.142 "name": null, 00:13:41.142 
"uuid": "6930d635-4879-481d-a3f8-27ce7c7998b3", 00:13:41.142 "is_configured": false, 00:13:41.142 "data_offset": 0, 00:13:41.142 "data_size": 63488 00:13:41.142 }, 00:13:41.142 { 00:13:41.142 "name": null, 00:13:41.142 "uuid": "8019adef-be0a-488d-a05c-9c770b8fd13f", 00:13:41.142 "is_configured": false, 00:13:41.142 "data_offset": 0, 00:13:41.142 "data_size": 63488 00:13:41.142 }, 00:13:41.142 { 00:13:41.142 "name": "BaseBdev3", 00:13:41.142 "uuid": "a7e85f48-1702-488c-846a-0500c8866abb", 00:13:41.142 "is_configured": true, 00:13:41.142 "data_offset": 2048, 00:13:41.142 "data_size": 63488 00:13:41.142 }, 00:13:41.142 { 00:13:41.142 "name": "BaseBdev4", 00:13:41.142 "uuid": "0bfdc89f-e1ee-46f9-bf34-511c73a37a79", 00:13:41.142 "is_configured": true, 00:13:41.142 "data_offset": 2048, 00:13:41.142 "data_size": 63488 00:13:41.142 } 00:13:41.142 ] 00:13:41.142 }' 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.142 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.710 [2024-11-27 14:13:18.808519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:41.710 14:13:18 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:41.710 "name": "Existed_Raid", 00:13:41.710 "uuid": "e29ab660-6483-4216-b611-e0e4fa3a808e", 00:13:41.710 "strip_size_kb": 0, 00:13:41.710 "state": "configuring", 00:13:41.710 "raid_level": "raid1", 00:13:41.710 "superblock": true, 00:13:41.710 "num_base_bdevs": 4, 00:13:41.710 "num_base_bdevs_discovered": 3, 00:13:41.710 "num_base_bdevs_operational": 4, 00:13:41.710 "base_bdevs_list": [ 00:13:41.710 { 00:13:41.710 "name": null, 00:13:41.710 "uuid": "6930d635-4879-481d-a3f8-27ce7c7998b3", 00:13:41.710 "is_configured": false, 00:13:41.710 "data_offset": 0, 00:13:41.710 "data_size": 63488 00:13:41.710 }, 00:13:41.710 { 00:13:41.710 "name": "BaseBdev2", 00:13:41.710 "uuid": "8019adef-be0a-488d-a05c-9c770b8fd13f", 00:13:41.710 "is_configured": true, 00:13:41.710 "data_offset": 2048, 00:13:41.710 "data_size": 63488 00:13:41.710 }, 00:13:41.710 { 00:13:41.710 "name": "BaseBdev3", 00:13:41.710 "uuid": "a7e85f48-1702-488c-846a-0500c8866abb", 00:13:41.710 "is_configured": true, 00:13:41.710 "data_offset": 2048, 00:13:41.710 "data_size": 63488 00:13:41.710 }, 00:13:41.710 { 00:13:41.710 "name": "BaseBdev4", 00:13:41.710 "uuid": "0bfdc89f-e1ee-46f9-bf34-511c73a37a79", 00:13:41.710 "is_configured": true, 00:13:41.710 "data_offset": 2048, 00:13:41.710 "data_size": 63488 00:13:41.710 } 00:13:41.710 ] 00:13:41.710 }' 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:41.710 14:13:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.304 14:13:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 6930d635-4879-481d-a3f8-27ce7c7998b3 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.304 [2024-11-27 14:13:19.468992] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:42.304 [2024-11-27 14:13:19.469332] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:42.304 [2024-11-27 14:13:19.469355] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:42.304 [2024-11-27 14:13:19.469704] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d0000063c0 00:13:42.304 NewBaseBdev 00:13:42.304 [2024-11-27 14:13:19.469929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:42.304 [2024-11-27 14:13:19.469945] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:13:42.304 [2024-11-27 14:13:19.470131] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.304 14:13:19 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.304 [ 00:13:42.304 { 00:13:42.304 "name": "NewBaseBdev", 00:13:42.304 "aliases": [ 00:13:42.304 "6930d635-4879-481d-a3f8-27ce7c7998b3" 00:13:42.304 ], 00:13:42.304 "product_name": "Malloc disk", 00:13:42.304 "block_size": 512, 00:13:42.304 "num_blocks": 65536, 00:13:42.304 "uuid": "6930d635-4879-481d-a3f8-27ce7c7998b3", 00:13:42.304 "assigned_rate_limits": { 00:13:42.304 "rw_ios_per_sec": 0, 00:13:42.304 "rw_mbytes_per_sec": 0, 00:13:42.304 "r_mbytes_per_sec": 0, 00:13:42.304 "w_mbytes_per_sec": 0 00:13:42.304 }, 00:13:42.304 "claimed": true, 00:13:42.304 "claim_type": "exclusive_write", 00:13:42.304 "zoned": false, 00:13:42.304 "supported_io_types": { 00:13:42.304 "read": true, 00:13:42.304 "write": true, 00:13:42.304 "unmap": true, 00:13:42.304 "flush": true, 00:13:42.304 "reset": true, 00:13:42.304 "nvme_admin": false, 00:13:42.304 "nvme_io": false, 00:13:42.304 "nvme_io_md": false, 00:13:42.304 "write_zeroes": true, 00:13:42.304 "zcopy": true, 00:13:42.304 "get_zone_info": false, 00:13:42.304 "zone_management": false, 00:13:42.304 "zone_append": false, 00:13:42.304 "compare": false, 00:13:42.304 "compare_and_write": false, 00:13:42.304 "abort": true, 00:13:42.304 "seek_hole": false, 00:13:42.304 "seek_data": false, 00:13:42.304 "copy": true, 00:13:42.304 "nvme_iov_md": false 00:13:42.304 }, 00:13:42.304 "memory_domains": [ 00:13:42.304 { 00:13:42.304 "dma_device_id": "system", 00:13:42.304 "dma_device_type": 1 00:13:42.304 }, 00:13:42.304 { 00:13:42.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.304 "dma_device_type": 2 00:13:42.304 } 00:13:42.304 ], 00:13:42.304 "driver_specific": {} 00:13:42.304 } 00:13:42.304 ] 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:13:42.304 14:13:19 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.304 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:42.304 "name": "Existed_Raid", 00:13:42.304 "uuid": "e29ab660-6483-4216-b611-e0e4fa3a808e", 00:13:42.304 "strip_size_kb": 0, 00:13:42.304 
"state": "online", 00:13:42.304 "raid_level": "raid1", 00:13:42.304 "superblock": true, 00:13:42.304 "num_base_bdevs": 4, 00:13:42.304 "num_base_bdevs_discovered": 4, 00:13:42.304 "num_base_bdevs_operational": 4, 00:13:42.304 "base_bdevs_list": [ 00:13:42.304 { 00:13:42.304 "name": "NewBaseBdev", 00:13:42.304 "uuid": "6930d635-4879-481d-a3f8-27ce7c7998b3", 00:13:42.304 "is_configured": true, 00:13:42.304 "data_offset": 2048, 00:13:42.304 "data_size": 63488 00:13:42.304 }, 00:13:42.304 { 00:13:42.304 "name": "BaseBdev2", 00:13:42.304 "uuid": "8019adef-be0a-488d-a05c-9c770b8fd13f", 00:13:42.304 "is_configured": true, 00:13:42.304 "data_offset": 2048, 00:13:42.304 "data_size": 63488 00:13:42.304 }, 00:13:42.304 { 00:13:42.304 "name": "BaseBdev3", 00:13:42.304 "uuid": "a7e85f48-1702-488c-846a-0500c8866abb", 00:13:42.304 "is_configured": true, 00:13:42.304 "data_offset": 2048, 00:13:42.304 "data_size": 63488 00:13:42.304 }, 00:13:42.304 { 00:13:42.304 "name": "BaseBdev4", 00:13:42.304 "uuid": "0bfdc89f-e1ee-46f9-bf34-511c73a37a79", 00:13:42.304 "is_configured": true, 00:13:42.304 "data_offset": 2048, 00:13:42.304 "data_size": 63488 00:13:42.305 } 00:13:42.305 ] 00:13:42.305 }' 00:13:42.305 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:42.305 14:13:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.873 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:13:42.873 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:13:42.873 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:42.873 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:42.873 14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:13:42.873 
14:13:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:42.873 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:13:42.873 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:42.873 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:42.873 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:42.873 [2024-11-27 14:13:20.009597] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:42.873 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:42.873 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:42.873 "name": "Existed_Raid", 00:13:42.873 "aliases": [ 00:13:42.873 "e29ab660-6483-4216-b611-e0e4fa3a808e" 00:13:42.873 ], 00:13:42.873 "product_name": "Raid Volume", 00:13:42.873 "block_size": 512, 00:13:42.873 "num_blocks": 63488, 00:13:42.873 "uuid": "e29ab660-6483-4216-b611-e0e4fa3a808e", 00:13:42.873 "assigned_rate_limits": { 00:13:42.873 "rw_ios_per_sec": 0, 00:13:42.873 "rw_mbytes_per_sec": 0, 00:13:42.873 "r_mbytes_per_sec": 0, 00:13:42.873 "w_mbytes_per_sec": 0 00:13:42.873 }, 00:13:42.873 "claimed": false, 00:13:42.873 "zoned": false, 00:13:42.873 "supported_io_types": { 00:13:42.873 "read": true, 00:13:42.873 "write": true, 00:13:42.873 "unmap": false, 00:13:42.873 "flush": false, 00:13:42.873 "reset": true, 00:13:42.873 "nvme_admin": false, 00:13:42.873 "nvme_io": false, 00:13:42.873 "nvme_io_md": false, 00:13:42.873 "write_zeroes": true, 00:13:42.873 "zcopy": false, 00:13:42.873 "get_zone_info": false, 00:13:42.873 "zone_management": false, 00:13:42.873 "zone_append": false, 00:13:42.873 "compare": false, 00:13:42.873 "compare_and_write": false, 00:13:42.873 
"abort": false, 00:13:42.873 "seek_hole": false, 00:13:42.873 "seek_data": false, 00:13:42.873 "copy": false, 00:13:42.873 "nvme_iov_md": false 00:13:42.873 }, 00:13:42.873 "memory_domains": [ 00:13:42.873 { 00:13:42.873 "dma_device_id": "system", 00:13:42.873 "dma_device_type": 1 00:13:42.873 }, 00:13:42.873 { 00:13:42.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.873 "dma_device_type": 2 00:13:42.873 }, 00:13:42.873 { 00:13:42.873 "dma_device_id": "system", 00:13:42.873 "dma_device_type": 1 00:13:42.873 }, 00:13:42.873 { 00:13:42.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.873 "dma_device_type": 2 00:13:42.873 }, 00:13:42.873 { 00:13:42.873 "dma_device_id": "system", 00:13:42.873 "dma_device_type": 1 00:13:42.873 }, 00:13:42.873 { 00:13:42.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.873 "dma_device_type": 2 00:13:42.873 }, 00:13:42.873 { 00:13:42.873 "dma_device_id": "system", 00:13:42.873 "dma_device_type": 1 00:13:42.873 }, 00:13:42.873 { 00:13:42.873 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:42.873 "dma_device_type": 2 00:13:42.873 } 00:13:42.873 ], 00:13:42.873 "driver_specific": { 00:13:42.873 "raid": { 00:13:42.873 "uuid": "e29ab660-6483-4216-b611-e0e4fa3a808e", 00:13:42.873 "strip_size_kb": 0, 00:13:42.873 "state": "online", 00:13:42.873 "raid_level": "raid1", 00:13:42.873 "superblock": true, 00:13:42.873 "num_base_bdevs": 4, 00:13:42.873 "num_base_bdevs_discovered": 4, 00:13:42.873 "num_base_bdevs_operational": 4, 00:13:42.873 "base_bdevs_list": [ 00:13:42.873 { 00:13:42.873 "name": "NewBaseBdev", 00:13:42.873 "uuid": "6930d635-4879-481d-a3f8-27ce7c7998b3", 00:13:42.873 "is_configured": true, 00:13:42.873 "data_offset": 2048, 00:13:42.873 "data_size": 63488 00:13:42.873 }, 00:13:42.873 { 00:13:42.873 "name": "BaseBdev2", 00:13:42.873 "uuid": "8019adef-be0a-488d-a05c-9c770b8fd13f", 00:13:42.873 "is_configured": true, 00:13:42.873 "data_offset": 2048, 00:13:42.873 "data_size": 63488 00:13:42.873 }, 00:13:42.873 { 
00:13:42.873 "name": "BaseBdev3", 00:13:42.873 "uuid": "a7e85f48-1702-488c-846a-0500c8866abb", 00:13:42.873 "is_configured": true, 00:13:42.873 "data_offset": 2048, 00:13:42.873 "data_size": 63488 00:13:42.873 }, 00:13:42.873 { 00:13:42.873 "name": "BaseBdev4", 00:13:42.873 "uuid": "0bfdc89f-e1ee-46f9-bf34-511c73a37a79", 00:13:42.873 "is_configured": true, 00:13:42.873 "data_offset": 2048, 00:13:42.873 "data_size": 63488 00:13:42.873 } 00:13:42.873 ] 00:13:42.873 } 00:13:42.873 } 00:13:42.873 }' 00:13:42.873 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:42.873 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:13:42.873 BaseBdev2 00:13:42.873 BaseBdev3 00:13:42.873 BaseBdev4' 00:13:42.873 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 
00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:43.133 [2024-11-27 14:13:20.369315] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:43.133 [2024-11-27 14:13:20.369349] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:43.133 [2024-11-27 14:13:20.369469] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:43.133 [2024-11-27 14:13:20.369880] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:43.133 [2024-11-27 14:13:20.369912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008200 name Existed_Raid, state offline 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 73910 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 73910 ']' 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 73910 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:43.133 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73910 00:13:43.393 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:43.393 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:43.393 killing process with pid 73910 00:13:43.393 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73910' 00:13:43.393 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 73910 00:13:43.393 [2024-11-27 14:13:20.410917] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:43.393 14:13:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 73910 00:13:43.652 [2024-11-27 14:13:20.753668] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:44.588 14:13:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:13:44.588 00:13:44.588 real 0m12.755s 00:13:44.588 user 0m21.201s 00:13:44.589 sys 0m1.729s 00:13:44.589 14:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:13:44.589 14:13:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:44.589 ************************************ 00:13:44.589 END TEST raid_state_function_test_sb 00:13:44.589 ************************************ 00:13:44.589 14:13:21 bdev_raid -- bdev/bdev_raid.sh@970 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:13:44.589 14:13:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:44.589 14:13:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:44.589 14:13:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:44.589 ************************************ 00:13:44.589 START TEST raid_superblock_test 00:13:44.589 ************************************ 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 4 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:13:44.589 14:13:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=74590 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 74590 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 74590 ']' 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:44.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:44.589 14:13:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:44.848 [2024-11-27 14:13:21.948224] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:13:44.848 [2024-11-27 14:13:21.948408] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74590 ] 00:13:45.108 [2024-11-27 14:13:22.133160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:45.108 [2024-11-27 14:13:22.259391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:45.367 [2024-11-27 14:13:22.462257] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.367 [2024-11-27 14:13:22.462345] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc1 00:13:45.936 
14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.936 malloc1 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.936 [2024-11-27 14:13:22.987727] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:45.936 [2024-11-27 14:13:22.987825] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.936 [2024-11-27 14:13:22.987859] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:13:45.936 [2024-11-27 14:13:22.987874] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.936 [2024-11-27 14:13:22.990763] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.936 [2024-11-27 14:13:22.990820] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:45.936 pt1 00:13:45.936 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.937 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:45.937 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:45.937 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:13:45.937 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:13:45.937 14:13:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:45.937 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:45.937 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:45.937 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:45.937 14:13:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:13:45.937 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.937 14:13:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.937 malloc2 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.937 [2024-11-27 14:13:23.043544] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:45.937 [2024-11-27 14:13:23.043604] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.937 [2024-11-27 14:13:23.043637] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:13:45.937 [2024-11-27 14:13:23.043651] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.937 [2024-11-27 14:13:23.046334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.937 [2024-11-27 14:13:23.046371] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:45.937 
pt2 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.937 malloc3 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.937 [2024-11-27 14:13:23.109027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:45.937 [2024-11-27 14:13:23.109098] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.937 [2024-11-27 14:13:23.109128] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:13:45.937 [2024-11-27 14:13:23.109142] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.937 [2024-11-27 14:13:23.111885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.937 [2024-11-27 14:13:23.111921] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:45.937 pt3 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.937 malloc4 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.937 [2024-11-27 14:13:23.164730] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:45.937 [2024-11-27 14:13:23.164815] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:45.937 [2024-11-27 14:13:23.164882] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:13:45.937 [2024-11-27 14:13:23.164896] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:45.937 [2024-11-27 14:13:23.167653] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:45.937 [2024-11-27 14:13:23.167691] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:45.937 pt4 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.937 [2024-11-27 14:13:23.176756] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:45.937 [2024-11-27 14:13:23.179237] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:45.937 [2024-11-27 14:13:23.179324] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:45.937 [2024-11-27 14:13:23.179460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:45.937 [2024-11-27 14:13:23.179713] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:13:45.937 [2024-11-27 14:13:23.179742] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:45.937 [2024-11-27 14:13:23.180086] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:13:45.937 [2024-11-27 14:13:23.180365] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:13:45.937 [2024-11-27 14:13:23.180396] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:13:45.937 [2024-11-27 14:13:23.180595] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:45.937 
14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:45.937 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.196 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:46.197 "name": "raid_bdev1", 00:13:46.197 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:46.197 "strip_size_kb": 0, 00:13:46.197 "state": "online", 00:13:46.197 "raid_level": "raid1", 00:13:46.197 "superblock": true, 00:13:46.197 "num_base_bdevs": 4, 00:13:46.197 "num_base_bdevs_discovered": 4, 00:13:46.197 "num_base_bdevs_operational": 4, 00:13:46.197 "base_bdevs_list": [ 00:13:46.197 { 00:13:46.197 "name": "pt1", 00:13:46.197 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.197 "is_configured": true, 00:13:46.197 "data_offset": 2048, 00:13:46.197 "data_size": 63488 00:13:46.197 }, 00:13:46.197 { 00:13:46.197 "name": "pt2", 00:13:46.197 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.197 "is_configured": true, 00:13:46.197 "data_offset": 2048, 00:13:46.197 "data_size": 63488 00:13:46.197 }, 00:13:46.197 { 00:13:46.197 "name": "pt3", 00:13:46.197 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.197 "is_configured": true, 00:13:46.197 "data_offset": 2048, 00:13:46.197 "data_size": 63488 
00:13:46.197 }, 00:13:46.197 { 00:13:46.197 "name": "pt4", 00:13:46.197 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:46.197 "is_configured": true, 00:13:46.197 "data_offset": 2048, 00:13:46.197 "data_size": 63488 00:13:46.197 } 00:13:46.197 ] 00:13:46.197 }' 00:13:46.197 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:46.197 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.456 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:13:46.456 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:46.456 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:46.456 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:46.456 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:46.456 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:46.456 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:46.456 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:46.456 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.456 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.456 [2024-11-27 14:13:23.705329] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.456 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:46.715 "name": "raid_bdev1", 00:13:46.715 "aliases": [ 00:13:46.715 "b2a22b6a-2eec-4ca4-b57d-6571381e3ace" 00:13:46.715 ], 
00:13:46.715 "product_name": "Raid Volume", 00:13:46.715 "block_size": 512, 00:13:46.715 "num_blocks": 63488, 00:13:46.715 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:46.715 "assigned_rate_limits": { 00:13:46.715 "rw_ios_per_sec": 0, 00:13:46.715 "rw_mbytes_per_sec": 0, 00:13:46.715 "r_mbytes_per_sec": 0, 00:13:46.715 "w_mbytes_per_sec": 0 00:13:46.715 }, 00:13:46.715 "claimed": false, 00:13:46.715 "zoned": false, 00:13:46.715 "supported_io_types": { 00:13:46.715 "read": true, 00:13:46.715 "write": true, 00:13:46.715 "unmap": false, 00:13:46.715 "flush": false, 00:13:46.715 "reset": true, 00:13:46.715 "nvme_admin": false, 00:13:46.715 "nvme_io": false, 00:13:46.715 "nvme_io_md": false, 00:13:46.715 "write_zeroes": true, 00:13:46.715 "zcopy": false, 00:13:46.715 "get_zone_info": false, 00:13:46.715 "zone_management": false, 00:13:46.715 "zone_append": false, 00:13:46.715 "compare": false, 00:13:46.715 "compare_and_write": false, 00:13:46.715 "abort": false, 00:13:46.715 "seek_hole": false, 00:13:46.715 "seek_data": false, 00:13:46.715 "copy": false, 00:13:46.715 "nvme_iov_md": false 00:13:46.715 }, 00:13:46.715 "memory_domains": [ 00:13:46.715 { 00:13:46.715 "dma_device_id": "system", 00:13:46.715 "dma_device_type": 1 00:13:46.715 }, 00:13:46.715 { 00:13:46.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.715 "dma_device_type": 2 00:13:46.715 }, 00:13:46.715 { 00:13:46.715 "dma_device_id": "system", 00:13:46.715 "dma_device_type": 1 00:13:46.715 }, 00:13:46.715 { 00:13:46.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.715 "dma_device_type": 2 00:13:46.715 }, 00:13:46.715 { 00:13:46.715 "dma_device_id": "system", 00:13:46.715 "dma_device_type": 1 00:13:46.715 }, 00:13:46.715 { 00:13:46.715 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:46.715 "dma_device_type": 2 00:13:46.715 }, 00:13:46.715 { 00:13:46.715 "dma_device_id": "system", 00:13:46.715 "dma_device_type": 1 00:13:46.715 }, 00:13:46.715 { 00:13:46.715 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:13:46.715 "dma_device_type": 2 00:13:46.715 } 00:13:46.715 ], 00:13:46.715 "driver_specific": { 00:13:46.715 "raid": { 00:13:46.715 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:46.715 "strip_size_kb": 0, 00:13:46.715 "state": "online", 00:13:46.715 "raid_level": "raid1", 00:13:46.715 "superblock": true, 00:13:46.715 "num_base_bdevs": 4, 00:13:46.715 "num_base_bdevs_discovered": 4, 00:13:46.715 "num_base_bdevs_operational": 4, 00:13:46.715 "base_bdevs_list": [ 00:13:46.715 { 00:13:46.715 "name": "pt1", 00:13:46.715 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:46.715 "is_configured": true, 00:13:46.715 "data_offset": 2048, 00:13:46.715 "data_size": 63488 00:13:46.715 }, 00:13:46.715 { 00:13:46.715 "name": "pt2", 00:13:46.715 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:46.715 "is_configured": true, 00:13:46.715 "data_offset": 2048, 00:13:46.715 "data_size": 63488 00:13:46.715 }, 00:13:46.715 { 00:13:46.715 "name": "pt3", 00:13:46.715 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:46.715 "is_configured": true, 00:13:46.715 "data_offset": 2048, 00:13:46.715 "data_size": 63488 00:13:46.715 }, 00:13:46.715 { 00:13:46.715 "name": "pt4", 00:13:46.715 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:46.715 "is_configured": true, 00:13:46.715 "data_offset": 2048, 00:13:46.715 "data_size": 63488 00:13:46.715 } 00:13:46.715 ] 00:13:46.715 } 00:13:46.715 } 00:13:46.715 }' 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:46.715 pt2 00:13:46.715 pt3 00:13:46.715 pt4' 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.715 14:13:23 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.715 14:13:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.974 [2024-11-27 14:13:24.073446] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=b2a22b6a-2eec-4ca4-b57d-6571381e3ace 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z b2a22b6a-2eec-4ca4-b57d-6571381e3ace ']' 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.974 [2024-11-27 14:13:24.121032] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.974 [2024-11-27 14:13:24.121231] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:46.974 [2024-11-27 14:13:24.121429] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:46.974 [2024-11-27 14:13:24.121646] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:46.974 [2024-11-27 14:13:24.121830] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.974 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:46.975 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.234 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.234 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:13:47.234 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.235 14:13:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.235 [2024-11-27 14:13:24.273077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:47.235 [2024-11-27 14:13:24.275627] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:47.235 [2024-11-27 14:13:24.275881] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:47.235 [2024-11-27 14:13:24.275956] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:47.235 [2024-11-27 14:13:24.276030] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:47.235 [2024-11-27 14:13:24.276103] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:47.235 [2024-11-27 14:13:24.276136] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:47.235 [2024-11-27 14:13:24.276165] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:47.235 [2024-11-27 14:13:24.276186] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:47.235 [2024-11-27 14:13:24.276203] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name 
raid_bdev1, state configuring 00:13:47.235 request: 00:13:47.235 { 00:13:47.235 "name": "raid_bdev1", 00:13:47.235 "raid_level": "raid1", 00:13:47.235 "base_bdevs": [ 00:13:47.235 "malloc1", 00:13:47.235 "malloc2", 00:13:47.235 "malloc3", 00:13:47.235 "malloc4" 00:13:47.235 ], 00:13:47.235 "superblock": false, 00:13:47.235 "method": "bdev_raid_create", 00:13:47.235 "req_id": 1 00:13:47.235 } 00:13:47.235 Got JSON-RPC error response 00:13:47.235 response: 00:13:47.235 { 00:13:47.235 "code": -17, 00:13:47.235 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:47.235 } 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:47.235 
14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.235 [2024-11-27 14:13:24.345081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:47.235 [2024-11-27 14:13:24.345362] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.235 [2024-11-27 14:13:24.345428] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:47.235 [2024-11-27 14:13:24.345540] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.235 [2024-11-27 14:13:24.348520] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.235 [2024-11-27 14:13:24.348703] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:47.235 [2024-11-27 14:13:24.348830] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:47.235 [2024-11-27 14:13:24.348905] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:47.235 pt1 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.235 14:13:24 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.235 "name": "raid_bdev1", 00:13:47.235 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:47.235 "strip_size_kb": 0, 00:13:47.235 "state": "configuring", 00:13:47.235 "raid_level": "raid1", 00:13:47.235 "superblock": true, 00:13:47.235 "num_base_bdevs": 4, 00:13:47.235 "num_base_bdevs_discovered": 1, 00:13:47.235 "num_base_bdevs_operational": 4, 00:13:47.235 "base_bdevs_list": [ 00:13:47.235 { 00:13:47.235 "name": "pt1", 00:13:47.235 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:47.235 "is_configured": true, 00:13:47.235 "data_offset": 2048, 00:13:47.235 "data_size": 63488 00:13:47.235 }, 00:13:47.235 { 00:13:47.235 "name": null, 00:13:47.235 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.235 "is_configured": false, 00:13:47.235 "data_offset": 2048, 00:13:47.235 "data_size": 63488 00:13:47.235 }, 00:13:47.235 { 00:13:47.235 "name": null, 00:13:47.235 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.235 
"is_configured": false, 00:13:47.235 "data_offset": 2048, 00:13:47.235 "data_size": 63488 00:13:47.235 }, 00:13:47.235 { 00:13:47.235 "name": null, 00:13:47.235 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:47.235 "is_configured": false, 00:13:47.235 "data_offset": 2048, 00:13:47.235 "data_size": 63488 00:13:47.235 } 00:13:47.235 ] 00:13:47.235 }' 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.235 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.804 [2024-11-27 14:13:24.905306] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:47.804 [2024-11-27 14:13:24.905547] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:47.804 [2024-11-27 14:13:24.905692] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:13:47.804 [2024-11-27 14:13:24.905848] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.804 [2024-11-27 14:13:24.906458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.804 [2024-11-27 14:13:24.906650] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:47.804 [2024-11-27 14:13:24.906886] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:47.804 [2024-11-27 14:13:24.906935] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 
00:13:47.804 pt2 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.804 [2024-11-27 14:13:24.913294] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:47.804 14:13:24 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:47.804 "name": "raid_bdev1", 00:13:47.804 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:47.804 "strip_size_kb": 0, 00:13:47.804 "state": "configuring", 00:13:47.804 "raid_level": "raid1", 00:13:47.804 "superblock": true, 00:13:47.804 "num_base_bdevs": 4, 00:13:47.804 "num_base_bdevs_discovered": 1, 00:13:47.804 "num_base_bdevs_operational": 4, 00:13:47.804 "base_bdevs_list": [ 00:13:47.804 { 00:13:47.804 "name": "pt1", 00:13:47.804 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:47.804 "is_configured": true, 00:13:47.804 "data_offset": 2048, 00:13:47.804 "data_size": 63488 00:13:47.804 }, 00:13:47.804 { 00:13:47.804 "name": null, 00:13:47.804 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.804 "is_configured": false, 00:13:47.804 "data_offset": 0, 00:13:47.804 "data_size": 63488 00:13:47.804 }, 00:13:47.804 { 00:13:47.804 "name": null, 00:13:47.804 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.804 "is_configured": false, 00:13:47.804 "data_offset": 2048, 00:13:47.804 "data_size": 63488 00:13:47.804 }, 00:13:47.804 { 00:13:47.804 "name": null, 00:13:47.804 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:47.804 "is_configured": false, 00:13:47.804 "data_offset": 2048, 00:13:47.804 "data_size": 63488 00:13:47.804 } 00:13:47.804 ] 00:13:47.804 }' 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:47.804 14:13:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i 
= 1 )) 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.374 [2024-11-27 14:13:25.429480] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:48.374 [2024-11-27 14:13:25.429568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.374 [2024-11-27 14:13:25.429597] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:13:48.374 [2024-11-27 14:13:25.429611] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.374 [2024-11-27 14:13:25.430224] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.374 [2024-11-27 14:13:25.430247] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:48.374 [2024-11-27 14:13:25.430345] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:48.374 [2024-11-27 14:13:25.430374] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:48.374 pt2 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:48.374 14:13:25 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.374 [2024-11-27 14:13:25.441484] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:48.374 [2024-11-27 14:13:25.441567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.374 [2024-11-27 14:13:25.441596] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:13:48.374 [2024-11-27 14:13:25.441609] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.374 [2024-11-27 14:13:25.442195] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.374 [2024-11-27 14:13:25.442223] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:48.374 [2024-11-27 14:13:25.442353] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:48.374 [2024-11-27 14:13:25.442383] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:48.374 pt3 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.374 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.374 [2024-11-27 14:13:25.449444] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:48.374 [2024-11-27 
14:13:25.449514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.374 [2024-11-27 14:13:25.449542] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:13:48.374 [2024-11-27 14:13:25.449555] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.374 [2024-11-27 14:13:25.450127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.374 [2024-11-27 14:13:25.450176] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:48.374 [2024-11-27 14:13:25.450305] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:48.374 [2024-11-27 14:13:25.450342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:48.375 [2024-11-27 14:13:25.450526] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:13:48.375 [2024-11-27 14:13:25.450542] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:48.375 [2024-11-27 14:13:25.450915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:13:48.375 [2024-11-27 14:13:25.451114] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:13:48.375 [2024-11-27 14:13:25.451135] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:13:48.375 [2024-11-27 14:13:25.451332] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:48.375 pt4 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:48.375 "name": "raid_bdev1", 00:13:48.375 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:48.375 "strip_size_kb": 0, 00:13:48.375 "state": "online", 00:13:48.375 "raid_level": "raid1", 00:13:48.375 "superblock": true, 00:13:48.375 "num_base_bdevs": 4, 00:13:48.375 
"num_base_bdevs_discovered": 4, 00:13:48.375 "num_base_bdevs_operational": 4, 00:13:48.375 "base_bdevs_list": [ 00:13:48.375 { 00:13:48.375 "name": "pt1", 00:13:48.375 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:48.375 "is_configured": true, 00:13:48.375 "data_offset": 2048, 00:13:48.375 "data_size": 63488 00:13:48.375 }, 00:13:48.375 { 00:13:48.375 "name": "pt2", 00:13:48.375 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:48.375 "is_configured": true, 00:13:48.375 "data_offset": 2048, 00:13:48.375 "data_size": 63488 00:13:48.375 }, 00:13:48.375 { 00:13:48.375 "name": "pt3", 00:13:48.375 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:48.375 "is_configured": true, 00:13:48.375 "data_offset": 2048, 00:13:48.375 "data_size": 63488 00:13:48.375 }, 00:13:48.375 { 00:13:48.375 "name": "pt4", 00:13:48.375 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:48.375 "is_configured": true, 00:13:48.375 "data_offset": 2048, 00:13:48.375 "data_size": 63488 00:13:48.375 } 00:13:48.375 ] 00:13:48.375 }' 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:48.375 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.944 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:13:48.944 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:13:48.944 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:13:48.944 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:13:48.944 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:13:48.944 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:13:48.944 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b 
raid_bdev1 00:13:48.944 14:13:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:13:48.944 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.944 14:13:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.944 [2024-11-27 14:13:25.994125] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:48.944 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.944 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:13:48.944 "name": "raid_bdev1", 00:13:48.944 "aliases": [ 00:13:48.944 "b2a22b6a-2eec-4ca4-b57d-6571381e3ace" 00:13:48.944 ], 00:13:48.944 "product_name": "Raid Volume", 00:13:48.944 "block_size": 512, 00:13:48.944 "num_blocks": 63488, 00:13:48.944 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:48.944 "assigned_rate_limits": { 00:13:48.944 "rw_ios_per_sec": 0, 00:13:48.944 "rw_mbytes_per_sec": 0, 00:13:48.944 "r_mbytes_per_sec": 0, 00:13:48.944 "w_mbytes_per_sec": 0 00:13:48.944 }, 00:13:48.944 "claimed": false, 00:13:48.944 "zoned": false, 00:13:48.944 "supported_io_types": { 00:13:48.944 "read": true, 00:13:48.944 "write": true, 00:13:48.944 "unmap": false, 00:13:48.944 "flush": false, 00:13:48.944 "reset": true, 00:13:48.944 "nvme_admin": false, 00:13:48.944 "nvme_io": false, 00:13:48.944 "nvme_io_md": false, 00:13:48.944 "write_zeroes": true, 00:13:48.944 "zcopy": false, 00:13:48.944 "get_zone_info": false, 00:13:48.944 "zone_management": false, 00:13:48.944 "zone_append": false, 00:13:48.944 "compare": false, 00:13:48.944 "compare_and_write": false, 00:13:48.944 "abort": false, 00:13:48.944 "seek_hole": false, 00:13:48.944 "seek_data": false, 00:13:48.944 "copy": false, 00:13:48.944 "nvme_iov_md": false 00:13:48.944 }, 00:13:48.944 "memory_domains": [ 00:13:48.944 { 00:13:48.944 "dma_device_id": "system", 00:13:48.944 
"dma_device_type": 1 00:13:48.944 }, 00:13:48.944 { 00:13:48.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.944 "dma_device_type": 2 00:13:48.944 }, 00:13:48.944 { 00:13:48.944 "dma_device_id": "system", 00:13:48.944 "dma_device_type": 1 00:13:48.944 }, 00:13:48.944 { 00:13:48.944 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.945 "dma_device_type": 2 00:13:48.945 }, 00:13:48.945 { 00:13:48.945 "dma_device_id": "system", 00:13:48.945 "dma_device_type": 1 00:13:48.945 }, 00:13:48.945 { 00:13:48.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.945 "dma_device_type": 2 00:13:48.945 }, 00:13:48.945 { 00:13:48.945 "dma_device_id": "system", 00:13:48.945 "dma_device_type": 1 00:13:48.945 }, 00:13:48.945 { 00:13:48.945 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:48.945 "dma_device_type": 2 00:13:48.945 } 00:13:48.945 ], 00:13:48.945 "driver_specific": { 00:13:48.945 "raid": { 00:13:48.945 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:48.945 "strip_size_kb": 0, 00:13:48.945 "state": "online", 00:13:48.945 "raid_level": "raid1", 00:13:48.945 "superblock": true, 00:13:48.945 "num_base_bdevs": 4, 00:13:48.945 "num_base_bdevs_discovered": 4, 00:13:48.945 "num_base_bdevs_operational": 4, 00:13:48.945 "base_bdevs_list": [ 00:13:48.945 { 00:13:48.945 "name": "pt1", 00:13:48.945 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:48.945 "is_configured": true, 00:13:48.945 "data_offset": 2048, 00:13:48.945 "data_size": 63488 00:13:48.945 }, 00:13:48.945 { 00:13:48.945 "name": "pt2", 00:13:48.945 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:48.945 "is_configured": true, 00:13:48.945 "data_offset": 2048, 00:13:48.945 "data_size": 63488 00:13:48.945 }, 00:13:48.945 { 00:13:48.945 "name": "pt3", 00:13:48.945 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:48.945 "is_configured": true, 00:13:48.945 "data_offset": 2048, 00:13:48.945 "data_size": 63488 00:13:48.945 }, 00:13:48.945 { 00:13:48.945 "name": "pt4", 00:13:48.945 "uuid": 
"00000000-0000-0000-0000-000000000004", 00:13:48.945 "is_configured": true, 00:13:48.945 "data_offset": 2048, 00:13:48.945 "data_size": 63488 00:13:48.945 } 00:13:48.945 ] 00:13:48.945 } 00:13:48.945 } 00:13:48.945 }' 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:13:48.945 pt2 00:13:48.945 pt3 00:13:48.945 pt4' 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:48.945 14:13:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:48.945 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.205 [2024-11-27 14:13:26.370214] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' b2a22b6a-2eec-4ca4-b57d-6571381e3ace '!=' b2a22b6a-2eec-4ca4-b57d-6571381e3ace ']' 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.205 [2024-11-27 14:13:26.421842] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:13:49.205 14:13:26 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.205 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.465 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.465 "name": "raid_bdev1", 00:13:49.465 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:49.465 "strip_size_kb": 0, 00:13:49.465 "state": "online", 
00:13:49.465 "raid_level": "raid1", 00:13:49.465 "superblock": true, 00:13:49.465 "num_base_bdevs": 4, 00:13:49.465 "num_base_bdevs_discovered": 3, 00:13:49.465 "num_base_bdevs_operational": 3, 00:13:49.465 "base_bdevs_list": [ 00:13:49.465 { 00:13:49.465 "name": null, 00:13:49.465 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.465 "is_configured": false, 00:13:49.465 "data_offset": 0, 00:13:49.465 "data_size": 63488 00:13:49.465 }, 00:13:49.465 { 00:13:49.465 "name": "pt2", 00:13:49.465 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:49.465 "is_configured": true, 00:13:49.465 "data_offset": 2048, 00:13:49.465 "data_size": 63488 00:13:49.465 }, 00:13:49.465 { 00:13:49.465 "name": "pt3", 00:13:49.465 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:49.465 "is_configured": true, 00:13:49.465 "data_offset": 2048, 00:13:49.465 "data_size": 63488 00:13:49.465 }, 00:13:49.465 { 00:13:49.465 "name": "pt4", 00:13:49.465 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:49.465 "is_configured": true, 00:13:49.465 "data_offset": 2048, 00:13:49.465 "data_size": 63488 00:13:49.465 } 00:13:49.465 ] 00:13:49.465 }' 00:13:49.465 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.465 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.724 [2024-11-27 14:13:26.925944] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:49.724 [2024-11-27 14:13:26.925997] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:49.724 [2024-11-27 14:13:26.926095] bdev_raid.c: 492:_raid_bdev_destruct: 
*DEBUG*: raid_bdev_destruct 00:13:49.724 [2024-11-27 14:13:26.926251] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:49.724 [2024-11-27 14:13:26.926266] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:49.724 
14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.724 14:13:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.984 [2024-11-27 14:13:27.009934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:49.984 [2024-11-27 14:13:27.010009] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.984 [2024-11-27 14:13:27.010038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:13:49.984 [2024-11-27 14:13:27.010051] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.984 [2024-11-27 14:13:27.013096] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.984 [2024-11-27 14:13:27.013169] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:49.984 [2024-11-27 14:13:27.013296] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:49.984 [2024-11-27 14:13:27.013351] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:49.984 pt2 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:49.984 "name": "raid_bdev1", 00:13:49.984 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:49.984 "strip_size_kb": 0, 00:13:49.984 "state": "configuring", 00:13:49.984 "raid_level": "raid1", 00:13:49.984 "superblock": true, 00:13:49.984 "num_base_bdevs": 4, 00:13:49.984 "num_base_bdevs_discovered": 1, 00:13:49.984 "num_base_bdevs_operational": 3, 00:13:49.984 "base_bdevs_list": [ 00:13:49.984 { 00:13:49.984 "name": null, 00:13:49.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:49.984 "is_configured": false, 00:13:49.984 "data_offset": 2048, 00:13:49.984 "data_size": 63488 00:13:49.984 }, 00:13:49.984 { 00:13:49.984 "name": "pt2", 00:13:49.984 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:49.984 "is_configured": true, 00:13:49.984 "data_offset": 2048, 00:13:49.984 "data_size": 63488 00:13:49.984 }, 00:13:49.984 { 00:13:49.984 "name": null, 00:13:49.984 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:49.984 "is_configured": false, 00:13:49.984 "data_offset": 2048, 00:13:49.984 "data_size": 63488 00:13:49.984 }, 00:13:49.984 { 00:13:49.984 "name": null, 00:13:49.984 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:49.984 "is_configured": false, 00:13:49.984 "data_offset": 2048, 00:13:49.984 "data_size": 63488 00:13:49.984 } 00:13:49.984 ] 00:13:49.984 }' 
00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:49.984 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.266 [2024-11-27 14:13:27.526138] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:50.266 [2024-11-27 14:13:27.526343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.266 [2024-11-27 14:13:27.526388] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:13:50.266 [2024-11-27 14:13:27.526405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.266 [2024-11-27 14:13:27.526999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.266 [2024-11-27 14:13:27.527069] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:50.266 [2024-11-27 14:13:27.527183] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:50.266 [2024-11-27 14:13:27.527215] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:50.266 pt3 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 
3 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.266 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.546 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.546 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.546 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.546 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.546 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.546 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:50.546 "name": "raid_bdev1", 00:13:50.546 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:50.546 "strip_size_kb": 0, 00:13:50.546 "state": "configuring", 00:13:50.546 "raid_level": "raid1", 00:13:50.546 "superblock": true, 00:13:50.546 "num_base_bdevs": 4, 00:13:50.546 "num_base_bdevs_discovered": 2, 00:13:50.546 "num_base_bdevs_operational": 3, 00:13:50.546 
"base_bdevs_list": [ 00:13:50.546 { 00:13:50.546 "name": null, 00:13:50.546 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:50.546 "is_configured": false, 00:13:50.546 "data_offset": 2048, 00:13:50.546 "data_size": 63488 00:13:50.546 }, 00:13:50.546 { 00:13:50.546 "name": "pt2", 00:13:50.546 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:50.546 "is_configured": true, 00:13:50.546 "data_offset": 2048, 00:13:50.546 "data_size": 63488 00:13:50.546 }, 00:13:50.546 { 00:13:50.546 "name": "pt3", 00:13:50.546 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:50.546 "is_configured": true, 00:13:50.546 "data_offset": 2048, 00:13:50.546 "data_size": 63488 00:13:50.546 }, 00:13:50.546 { 00:13:50.546 "name": null, 00:13:50.546 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:50.546 "is_configured": false, 00:13:50.546 "data_offset": 2048, 00:13:50.546 "data_size": 63488 00:13:50.546 } 00:13:50.546 ] 00:13:50.546 }' 00:13:50.546 14:13:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:50.546 14:13:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.806 [2024-11-27 14:13:28.054347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:50.806 [2024-11-27 14:13:28.054457] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:50.806 [2024-11-27 14:13:28.054495] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:13:50.806 [2024-11-27 14:13:28.054510] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:50.806 [2024-11-27 14:13:28.055137] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:50.806 [2024-11-27 14:13:28.055211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:50.806 [2024-11-27 14:13:28.055319] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:50.806 [2024-11-27 14:13:28.055516] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:50.806 [2024-11-27 14:13:28.055699] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:13:50.806 [2024-11-27 14:13:28.055715] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:50.806 [2024-11-27 14:13:28.056038] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:13:50.806 [2024-11-27 14:13:28.056271] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:13:50.806 [2024-11-27 14:13:28.056291] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:13:50.806 [2024-11-27 14:13:28.056471] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:50.806 pt4 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.806 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.065 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.065 "name": "raid_bdev1", 00:13:51.065 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:51.065 "strip_size_kb": 0, 00:13:51.065 "state": "online", 00:13:51.065 "raid_level": "raid1", 00:13:51.065 "superblock": true, 00:13:51.065 "num_base_bdevs": 4, 00:13:51.065 "num_base_bdevs_discovered": 3, 00:13:51.065 "num_base_bdevs_operational": 3, 00:13:51.065 "base_bdevs_list": [ 00:13:51.065 { 00:13:51.065 "name": null, 00:13:51.065 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.065 "is_configured": false, 00:13:51.065 
"data_offset": 2048, 00:13:51.065 "data_size": 63488 00:13:51.065 }, 00:13:51.065 { 00:13:51.065 "name": "pt2", 00:13:51.065 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:51.065 "is_configured": true, 00:13:51.065 "data_offset": 2048, 00:13:51.065 "data_size": 63488 00:13:51.065 }, 00:13:51.065 { 00:13:51.065 "name": "pt3", 00:13:51.065 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:51.065 "is_configured": true, 00:13:51.065 "data_offset": 2048, 00:13:51.065 "data_size": 63488 00:13:51.065 }, 00:13:51.065 { 00:13:51.065 "name": "pt4", 00:13:51.065 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:51.065 "is_configured": true, 00:13:51.065 "data_offset": 2048, 00:13:51.065 "data_size": 63488 00:13:51.065 } 00:13:51.065 ] 00:13:51.065 }' 00:13:51.065 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.065 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.324 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:51.324 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.324 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.324 [2024-11-27 14:13:28.578438] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:51.324 [2024-11-27 14:13:28.578470] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:51.324 [2024-11-27 14:13:28.578568] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:51.324 [2024-11-27 14:13:28.578690] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:51.324 [2024-11-27 14:13:28.578711] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:13:51.324 14:13:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.324 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.324 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:13:51.324 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.324 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.324 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.583 [2024-11-27 14:13:28.654435] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:51.583 [2024-11-27 14:13:28.654519] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev 
opened 00:13:51.583 [2024-11-27 14:13:28.654546] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:13:51.583 [2024-11-27 14:13:28.654564] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:51.583 [2024-11-27 14:13:28.657545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:51.583 [2024-11-27 14:13:28.657593] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:51.583 [2024-11-27 14:13:28.657705] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:51.583 [2024-11-27 14:13:28.657764] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:51.583 [2024-11-27 14:13:28.657961] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:13:51.583 [2024-11-27 14:13:28.657985] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:51.583 [2024-11-27 14:13:28.658006] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:13:51.583 [2024-11-27 14:13:28.658081] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:51.583 [2024-11-27 14:13:28.658221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:51.583 pt1 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=configuring 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:51.583 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:51.583 "name": "raid_bdev1", 00:13:51.583 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:51.583 "strip_size_kb": 0, 00:13:51.583 "state": "configuring", 00:13:51.583 "raid_level": "raid1", 00:13:51.583 "superblock": true, 00:13:51.583 "num_base_bdevs": 4, 00:13:51.583 "num_base_bdevs_discovered": 2, 00:13:51.583 "num_base_bdevs_operational": 3, 00:13:51.583 "base_bdevs_list": [ 00:13:51.583 { 00:13:51.583 "name": null, 00:13:51.583 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:51.583 "is_configured": false, 00:13:51.583 "data_offset": 2048, 00:13:51.583 
"data_size": 63488 00:13:51.583 }, 00:13:51.583 { 00:13:51.584 "name": "pt2", 00:13:51.584 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:51.584 "is_configured": true, 00:13:51.584 "data_offset": 2048, 00:13:51.584 "data_size": 63488 00:13:51.584 }, 00:13:51.584 { 00:13:51.584 "name": "pt3", 00:13:51.584 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:51.584 "is_configured": true, 00:13:51.584 "data_offset": 2048, 00:13:51.584 "data_size": 63488 00:13:51.584 }, 00:13:51.584 { 00:13:51.584 "name": null, 00:13:51.584 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:51.584 "is_configured": false, 00:13:51.584 "data_offset": 2048, 00:13:51.584 "data_size": 63488 00:13:51.584 } 00:13:51.584 ] 00:13:51.584 }' 00:13:51.584 14:13:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:51.584 14:13:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.151 [2024-11-27 
14:13:29.234668] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:52.151 [2024-11-27 14:13:29.234743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:52.151 [2024-11-27 14:13:29.234798] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:13:52.151 [2024-11-27 14:13:29.234816] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:52.151 [2024-11-27 14:13:29.235359] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:52.151 [2024-11-27 14:13:29.235399] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:52.151 [2024-11-27 14:13:29.235510] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:52.151 [2024-11-27 14:13:29.235542] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:52.151 [2024-11-27 14:13:29.235707] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:13:52.151 [2024-11-27 14:13:29.235728] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:52.151 [2024-11-27 14:13:29.236057] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:13:52.151 [2024-11-27 14:13:29.236246] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:13:52.151 [2024-11-27 14:13:29.236266] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:13:52.151 [2024-11-27 14:13:29.236451] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:52.151 pt4 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:13:52.151 14:13:29 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:52.151 "name": "raid_bdev1", 00:13:52.151 "uuid": "b2a22b6a-2eec-4ca4-b57d-6571381e3ace", 00:13:52.151 "strip_size_kb": 0, 00:13:52.151 "state": "online", 00:13:52.151 "raid_level": "raid1", 00:13:52.151 "superblock": true, 00:13:52.151 "num_base_bdevs": 4, 00:13:52.151 "num_base_bdevs_discovered": 3, 00:13:52.151 "num_base_bdevs_operational": 3, 00:13:52.151 "base_bdevs_list": [ 00:13:52.151 { 
00:13:52.151 "name": null, 00:13:52.151 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:52.151 "is_configured": false, 00:13:52.151 "data_offset": 2048, 00:13:52.151 "data_size": 63488 00:13:52.151 }, 00:13:52.151 { 00:13:52.151 "name": "pt2", 00:13:52.151 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:52.151 "is_configured": true, 00:13:52.151 "data_offset": 2048, 00:13:52.151 "data_size": 63488 00:13:52.151 }, 00:13:52.151 { 00:13:52.151 "name": "pt3", 00:13:52.151 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:52.151 "is_configured": true, 00:13:52.151 "data_offset": 2048, 00:13:52.151 "data_size": 63488 00:13:52.151 }, 00:13:52.151 { 00:13:52.151 "name": "pt4", 00:13:52.151 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:52.151 "is_configured": true, 00:13:52.151 "data_offset": 2048, 00:13:52.151 "data_size": 63488 00:13:52.151 } 00:13:52.151 ] 00:13:52.151 }' 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:52.151 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:13:52.718 
14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.718 [2024-11-27 14:13:29.827234] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' b2a22b6a-2eec-4ca4-b57d-6571381e3ace '!=' b2a22b6a-2eec-4ca4-b57d-6571381e3ace ']' 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 74590 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 74590 ']' 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # kill -0 74590 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # uname 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74590 00:13:52.718 killing process with pid 74590 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74590' 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@973 -- # kill 74590 00:13:52.718 [2024-11-27 14:13:29.907993] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.718 14:13:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@978 -- # wait 74590 00:13:52.718 [2024-11-27 14:13:29.908112] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.718 [2024-11-27 14:13:29.908243] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.718 [2024-11-27 14:13:29.908263] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:13:53.285 [2024-11-27 14:13:30.266033] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:54.223 14:13:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:13:54.223 00:13:54.223 real 0m9.463s 00:13:54.223 user 0m15.585s 00:13:54.223 sys 0m1.400s 00:13:54.223 14:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:54.223 14:13:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 ************************************ 00:13:54.223 END TEST raid_superblock_test 00:13:54.223 ************************************ 00:13:54.223 14:13:31 bdev_raid -- bdev/bdev_raid.sh@971 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:13:54.223 14:13:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:54.223 14:13:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:54.223 14:13:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 ************************************ 00:13:54.223 START TEST raid_read_error_test 00:13:54.223 ************************************ 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 read 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=read 00:13:54.223 14:13:31 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.taCKeLMbLV 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75090 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75090 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@835 -- # '[' -z 75090 ']' 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:54.223 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:54.223 14:13:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:54.223 [2024-11-27 14:13:31.480460] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:13:54.223 [2024-11-27 14:13:31.480675] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75090 ] 00:13:54.530 [2024-11-27 14:13:31.669203] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:54.530 [2024-11-27 14:13:31.802166] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:54.789 [2024-11-27 14:13:32.008497] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:54.789 [2024-11-27 14:13:32.008561] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@868 -- # return 0 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.357 BaseBdev1_malloc 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.357 true 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.357 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.357 [2024-11-27 14:13:32.546309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:55.357 [2024-11-27 14:13:32.546401] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.358 [2024-11-27 14:13:32.546434] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:13:55.358 [2024-11-27 14:13:32.546453] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.358 [2024-11-27 14:13:32.549419] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.358 [2024-11-27 14:13:32.549483] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:55.358 BaseBdev1 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.358 BaseBdev2_malloc 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.358 true 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.358 [2024-11-27 14:13:32.611193] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:55.358 [2024-11-27 14:13:32.611429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.358 [2024-11-27 14:13:32.611465] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:13:55.358 [2024-11-27 14:13:32.611484] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.358 [2024-11-27 14:13:32.614333] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.358 [2024-11-27 14:13:32.614396] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:13:55.358 BaseBdev2 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.358 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.617 BaseBdev3_malloc 00:13:55.617 14:13:32 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.617 true 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.617 [2024-11-27 14:13:32.681311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:55.617 [2024-11-27 14:13:32.681389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.617 [2024-11-27 14:13:32.681414] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:13:55.617 [2024-11-27 14:13:32.681430] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.617 [2024-11-27 14:13:32.684286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.617 [2024-11-27 14:13:32.684330] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:55.617 BaseBdev3 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev4_malloc 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.617 BaseBdev4_malloc 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.617 true 00:13:55.617 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.618 [2024-11-27 14:13:32.735605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:55.618 [2024-11-27 14:13:32.735880] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.618 [2024-11-27 14:13:32.735918] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:13:55.618 [2024-11-27 14:13:32.735937] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.618 [2024-11-27 14:13:32.738748] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.618 [2024-11-27 14:13:32.738817] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:55.618 BaseBdev4 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.618 [2024-11-27 14:13:32.743683] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:55.618 [2024-11-27 14:13:32.746242] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:55.618 [2024-11-27 14:13:32.746336] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:55.618 [2024-11-27 14:13:32.746438] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:55.618 [2024-11-27 14:13:32.746740] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:13:55.618 [2024-11-27 14:13:32.746761] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:13:55.618 [2024-11-27 14:13:32.747129] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:13:55.618 [2024-11-27 14:13:32.747408] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:13:55.618 [2024-11-27 14:13:32.747423] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:13:55.618 [2024-11-27 14:13:32.747686] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:55.618 14:13:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:55.618 "name": "raid_bdev1", 00:13:55.618 "uuid": "bf9a1d10-f13a-43ff-8c8c-5acdb8d013a5", 00:13:55.618 "strip_size_kb": 0, 00:13:55.618 "state": "online", 00:13:55.618 "raid_level": "raid1", 00:13:55.618 "superblock": true, 00:13:55.618 "num_base_bdevs": 4, 00:13:55.618 "num_base_bdevs_discovered": 4, 00:13:55.618 "num_base_bdevs_operational": 4, 00:13:55.618 "base_bdevs_list": [ 00:13:55.618 { 
00:13:55.618 "name": "BaseBdev1", 00:13:55.618 "uuid": "941e468d-5302-5c4a-8b89-a85d52291929", 00:13:55.618 "is_configured": true, 00:13:55.618 "data_offset": 2048, 00:13:55.618 "data_size": 63488 00:13:55.618 }, 00:13:55.618 { 00:13:55.618 "name": "BaseBdev2", 00:13:55.618 "uuid": "f83462e0-2ed8-5f17-a5f0-1daf13d37c3e", 00:13:55.618 "is_configured": true, 00:13:55.618 "data_offset": 2048, 00:13:55.618 "data_size": 63488 00:13:55.618 }, 00:13:55.618 { 00:13:55.618 "name": "BaseBdev3", 00:13:55.618 "uuid": "edcd76a0-a62b-59cb-95ec-79bd183fcc38", 00:13:55.618 "is_configured": true, 00:13:55.618 "data_offset": 2048, 00:13:55.618 "data_size": 63488 00:13:55.618 }, 00:13:55.618 { 00:13:55.618 "name": "BaseBdev4", 00:13:55.618 "uuid": "1d51657c-fc35-57f5-a214-ef7c3abbed86", 00:13:55.618 "is_configured": true, 00:13:55.618 "data_offset": 2048, 00:13:55.618 "data_size": 63488 00:13:55.618 } 00:13:55.618 ] 00:13:55.618 }' 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:55.618 14:13:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.187 14:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:13:56.187 14:13:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:13:56.187 [2024-11-27 14:13:33.409406] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.125 14:13:34 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@832 -- # [[ read = \w\r\i\t\e ]] 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # expected_num_base_bdevs=4 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.125 14:13:34 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.125 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:13:57.125 "name": "raid_bdev1", 00:13:57.125 "uuid": "bf9a1d10-f13a-43ff-8c8c-5acdb8d013a5", 00:13:57.125 "strip_size_kb": 0, 00:13:57.125 "state": "online", 00:13:57.125 "raid_level": "raid1", 00:13:57.125 "superblock": true, 00:13:57.125 "num_base_bdevs": 4, 00:13:57.125 "num_base_bdevs_discovered": 4, 00:13:57.125 "num_base_bdevs_operational": 4, 00:13:57.125 "base_bdevs_list": [ 00:13:57.125 { 00:13:57.125 "name": "BaseBdev1", 00:13:57.125 "uuid": "941e468d-5302-5c4a-8b89-a85d52291929", 00:13:57.125 "is_configured": true, 00:13:57.125 "data_offset": 2048, 00:13:57.125 "data_size": 63488 00:13:57.125 }, 00:13:57.125 { 00:13:57.125 "name": "BaseBdev2", 00:13:57.125 "uuid": "f83462e0-2ed8-5f17-a5f0-1daf13d37c3e", 00:13:57.125 "is_configured": true, 00:13:57.125 "data_offset": 2048, 00:13:57.125 "data_size": 63488 00:13:57.125 }, 00:13:57.125 { 00:13:57.125 "name": "BaseBdev3", 00:13:57.125 "uuid": "edcd76a0-a62b-59cb-95ec-79bd183fcc38", 00:13:57.125 "is_configured": true, 00:13:57.125 "data_offset": 2048, 00:13:57.125 "data_size": 63488 00:13:57.125 }, 00:13:57.125 { 00:13:57.126 "name": "BaseBdev4", 00:13:57.126 "uuid": "1d51657c-fc35-57f5-a214-ef7c3abbed86", 00:13:57.126 "is_configured": true, 00:13:57.126 "data_offset": 2048, 00:13:57.126 "data_size": 63488 00:13:57.126 } 00:13:57.126 ] 00:13:57.126 }' 00:13:57.126 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:13:57.126 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@10 -- # set +x 00:13:57.694 [2024-11-27 14:13:34.814946] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:57.694 [2024-11-27 14:13:34.815117] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:57.694 [2024-11-27 14:13:34.818672] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:57.694 { 00:13:57.694 "results": [ 00:13:57.694 { 00:13:57.694 "job": "raid_bdev1", 00:13:57.694 "core_mask": "0x1", 00:13:57.694 "workload": "randrw", 00:13:57.694 "percentage": 50, 00:13:57.694 "status": "finished", 00:13:57.694 "queue_depth": 1, 00:13:57.694 "io_size": 131072, 00:13:57.694 "runtime": 1.403094, 00:13:57.694 "iops": 7679.456971521509, 00:13:57.694 "mibps": 959.9321214401887, 00:13:57.694 "io_failed": 0, 00:13:57.694 "io_timeout": 0, 00:13:57.694 "avg_latency_us": 125.91700991352037, 00:13:57.694 "min_latency_us": 38.167272727272724, 00:13:57.694 "max_latency_us": 2010.7636363636364 00:13:57.694 } 00:13:57.694 ], 00:13:57.694 "core_count": 1 00:13:57.694 } 00:13:57.694 [2024-11-27 14:13:34.818898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:57.694 [2024-11-27 14:13:34.819124] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:57.694 [2024-11-27 14:13:34.819150] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state offline 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75090 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@954 -- # '[' -z 75090 ']' 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # kill -0 75090 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@959 -- # uname 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75090 00:13:57.694 killing process with pid 75090 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75090' 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@973 -- # kill 75090 00:13:57.694 [2024-11-27 14:13:34.861398] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:57.694 14:13:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@978 -- # wait 75090 00:13:57.953 [2024-11-27 14:13:35.154583] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.331 14:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.taCKeLMbLV 00:13:59.331 14:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:13:59.331 14:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:13:59.331 14:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:13:59.331 14:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:13:59.331 14:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:13:59.331 14:13:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:13:59.331 ************************************ 00:13:59.331 END TEST raid_read_error_test 00:13:59.331 ************************************ 00:13:59.331 14:13:36 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:13:59.331 00:13:59.331 real 0m4.932s 00:13:59.331 user 0m6.095s 00:13:59.331 sys 0m0.621s 00:13:59.331 14:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:59.331 14:13:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.331 14:13:36 bdev_raid -- bdev/bdev_raid.sh@972 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:13:59.331 14:13:36 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:13:59.331 14:13:36 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:59.331 14:13:36 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.331 ************************************ 00:13:59.331 START TEST raid_write_error_test 00:13:59.331 ************************************ 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1129 -- # raid_io_error_test raid1 4 write 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local raid_level=raid1 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local num_base_bdevs=4 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local error_io_type=write 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i = 1 )) 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev1 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev2 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev3 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # echo BaseBdev4 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i++ )) 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # (( i <= num_base_bdevs )) 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local base_bdevs 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local raid_bdev_name=raid_bdev1 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local strip_size 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local create_arg 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@797 -- # local bdevperf_log 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # local fail_per_s 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # '[' raid1 '!=' raid1 ']' 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@804 -- # strip_size=0 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # mktemp -p /raidtest 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # bdevperf_log=/raidtest/tmp.3NnK9k49un 00:13:59.331 14:13:36 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@810 -- # raid_pid=75230 00:13:59.331 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@811 -- # waitforlisten 75230 00:13:59.332 14:13:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:59.332 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@835 -- # '[' -z 75230 ']' 00:13:59.332 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:59.332 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.332 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.332 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.332 14:13:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.332 [2024-11-27 14:13:36.447484] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:13:59.332 [2024-11-27 14:13:36.447640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75230 ] 00:13:59.590 [2024-11-27 14:13:36.630192] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.590 [2024-11-27 14:13:36.786929] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.850 [2024-11-27 14:13:37.002820] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.850 [2024-11-27 14:13:37.002892] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@868 -- # return 0 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.419 BaseBdev1_malloc 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev1_malloc 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.419 true 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.419 [2024-11-27 14:13:37.482720] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:00.419 [2024-11-27 14:13:37.482806] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.419 [2024-11-27 14:13:37.482838] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007580 00:14:00.419 [2024-11-27 14:13:37.482856] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.419 [2024-11-27 14:13:37.485696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.419 [2024-11-27 14:13:37.485760] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:00.419 BaseBdev1 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.419 BaseBdev2_malloc 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev2_malloc 00:14:00.419 14:13:37 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.419 true 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.419 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.419 [2024-11-27 14:13:37.547611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:00.419 [2024-11-27 14:13:37.547845] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.420 [2024-11-27 14:13:37.547886] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:14:00.420 [2024-11-27 14:13:37.547905] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.420 [2024-11-27 14:13:37.550833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.420 [2024-11-27 14:13:37.551013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:00.420 BaseBdev2 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 
00:14:00.420 BaseBdev3_malloc 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev3_malloc 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.420 true 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.420 [2024-11-27 14:13:37.626592] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:00.420 [2024-11-27 14:13:37.626684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.420 [2024-11-27 14:13:37.626712] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:14:00.420 [2024-11-27 14:13:37.626730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.420 [2024-11-27 14:13:37.629507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.420 [2024-11-27 14:13:37.629557] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:00.420 BaseBdev3 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # for bdev in "${base_bdevs[@]}" 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@815 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.420 BaseBdev4_malloc 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@816 -- # rpc_cmd bdev_error_create BaseBdev4_malloc 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.420 true 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@817 -- # rpc_cmd bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.420 [2024-11-27 14:13:37.687693] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:00.420 [2024-11-27 14:13:37.687940] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:00.420 [2024-11-27 14:13:37.687977] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:00.420 [2024-11-27 14:13:37.687996] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:00.420 [2024-11-27 14:13:37.691010] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:00.420 [2024-11-27 14:13:37.691173] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:00.420 BaseBdev4 
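For readers following the xtrace above: each of the four base bdevs is built by the same three-RPC chain — a malloc bdev, an error bdev wrapped around it (which registers under the `EE_` prefix), and a passthru bdev on top. The sketch below is illustrative only; `rpc_cmd` is stubbed to echo its arguments so the sequence can be traced without a running SPDK target (in the real script it dispatches to `scripts/rpc.py`).

```shell
#!/usr/bin/env bash
# Illustrative stub: echo each RPC instead of sending it to an SPDK target.
rpc_cmd() { echo "rpc_cmd $*"; }

base_bdevs=(BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4)
for bdev in "${base_bdevs[@]}"; do
    rpc_cmd bdev_malloc_create 32 512 -b "${bdev}_malloc"        # backing malloc bdev
    rpc_cmd bdev_error_create "${bdev}_malloc"                   # registers EE_${bdev}_malloc
    rpc_cmd bdev_passthru_create -b "EE_${bdev}_malloc" -p "$bdev"
done
```

The error bdev in the middle of the stack is what later lets `bdev_error_inject_error EE_BaseBdev1_malloc write failure` fail writes on a single member while the raid1 array stays online.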
00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@821 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 -s 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.420 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.679 [2024-11-27 14:13:37.699820] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:00.679 [2024-11-27 14:13:37.702342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:00.679 [2024-11-27 14:13:37.702444] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:00.679 [2024-11-27 14:13:37.702553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:00.679 [2024-11-27 14:13:37.702901] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008580 00:14:00.679 [2024-11-27 14:13:37.702927] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:00.679 [2024-11-27 14:13:37.703230] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000068a0 00:14:00.679 [2024-11-27 14:13:37.703469] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008580 00:14:00.679 [2024-11-27 14:13:37.703484] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008580 00:14:00.679 [2024-11-27 14:13:37.703734] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@822 -- # verify_raid_bdev_state raid_bdev1 
online raid1 0 4 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:00.679 "name": "raid_bdev1", 00:14:00.679 "uuid": "8e98bb9a-dee3-4eef-b660-fcca93d67672", 00:14:00.679 "strip_size_kb": 0, 00:14:00.679 "state": "online", 00:14:00.679 "raid_level": "raid1", 00:14:00.679 "superblock": true, 00:14:00.679 "num_base_bdevs": 4, 00:14:00.679 "num_base_bdevs_discovered": 4, 00:14:00.679 
"num_base_bdevs_operational": 4, 00:14:00.679 "base_bdevs_list": [ 00:14:00.679 { 00:14:00.679 "name": "BaseBdev1", 00:14:00.679 "uuid": "35496854-d5e1-5d02-a844-eaa494730d21", 00:14:00.679 "is_configured": true, 00:14:00.679 "data_offset": 2048, 00:14:00.679 "data_size": 63488 00:14:00.679 }, 00:14:00.679 { 00:14:00.679 "name": "BaseBdev2", 00:14:00.679 "uuid": "087f3649-b16c-582d-9cb0-faf8b8187ae3", 00:14:00.679 "is_configured": true, 00:14:00.679 "data_offset": 2048, 00:14:00.679 "data_size": 63488 00:14:00.679 }, 00:14:00.679 { 00:14:00.679 "name": "BaseBdev3", 00:14:00.679 "uuid": "ad8393d9-de8f-5d7c-8deb-df49ad4d0213", 00:14:00.679 "is_configured": true, 00:14:00.679 "data_offset": 2048, 00:14:00.679 "data_size": 63488 00:14:00.679 }, 00:14:00.679 { 00:14:00.679 "name": "BaseBdev4", 00:14:00.679 "uuid": "9c20f172-f96e-5150-962a-647c5f805596", 00:14:00.679 "is_configured": true, 00:14:00.679 "data_offset": 2048, 00:14:00.679 "data_size": 63488 00:14:00.679 } 00:14:00.679 ] 00:14:00.679 }' 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:00.679 14:13:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:00.939 14:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@826 -- # sleep 1 00:14:00.939 14:13:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@825 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:01.198 [2024-11-27 14:13:38.333522] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006a40 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # rpc_cmd bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.133 [2024-11-27 14:13:39.185107] 
bdev_raid.c:2276:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:14:02.133 [2024-11-27 14:13:39.185180] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:02.133 [2024-11-27 14:13:39.185452] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006a40 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # local expected_num_base_bdevs 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ raid1 = \r\a\i\d\1 ]] 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@832 -- # [[ write = \w\r\i\t\e ]] 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:02.133 "name": "raid_bdev1", 00:14:02.133 "uuid": "8e98bb9a-dee3-4eef-b660-fcca93d67672", 00:14:02.133 "strip_size_kb": 0, 00:14:02.133 "state": "online", 00:14:02.133 "raid_level": "raid1", 00:14:02.133 "superblock": true, 00:14:02.133 "num_base_bdevs": 4, 00:14:02.133 "num_base_bdevs_discovered": 3, 00:14:02.133 "num_base_bdevs_operational": 3, 00:14:02.133 "base_bdevs_list": [ 00:14:02.133 { 00:14:02.133 "name": null, 00:14:02.133 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:02.133 "is_configured": false, 00:14:02.133 "data_offset": 0, 00:14:02.133 "data_size": 63488 00:14:02.133 }, 00:14:02.133 { 00:14:02.133 "name": "BaseBdev2", 00:14:02.133 "uuid": "087f3649-b16c-582d-9cb0-faf8b8187ae3", 00:14:02.133 "is_configured": true, 00:14:02.133 "data_offset": 2048, 00:14:02.133 "data_size": 63488 00:14:02.133 }, 00:14:02.133 { 00:14:02.133 "name": "BaseBdev3", 00:14:02.133 "uuid": "ad8393d9-de8f-5d7c-8deb-df49ad4d0213", 00:14:02.133 "is_configured": true, 00:14:02.133 "data_offset": 2048, 00:14:02.133 "data_size": 63488 00:14:02.133 }, 00:14:02.133 { 00:14:02.133 "name": "BaseBdev4", 00:14:02.133 "uuid": "9c20f172-f96e-5150-962a-647c5f805596", 00:14:02.133 "is_configured": true, 00:14:02.133 "data_offset": 2048, 00:14:02.133 "data_size": 63488 00:14:02.133 } 00:14:02.133 ] 
00:14:02.133 }' 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:02.133 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.722 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:02.723 [2024-11-27 14:13:39.717046] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:02.723 [2024-11-27 14:13:39.717229] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:02.723 [2024-11-27 14:13:39.720905] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:02.723 [2024-11-27 14:13:39.721079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:02.723 [2024-11-27 14:13:39.721322] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:02.723 [2024-11-27 14:13:39.721509] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, sta{ 00:14:02.723 "results": [ 00:14:02.723 { 00:14:02.723 "job": "raid_bdev1", 00:14:02.723 "core_mask": "0x1", 00:14:02.723 "workload": "randrw", 00:14:02.723 "percentage": 50, 00:14:02.723 "status": "finished", 00:14:02.723 "queue_depth": 1, 00:14:02.723 "io_size": 131072, 00:14:02.723 "runtime": 1.38121, 00:14:02.723 "iops": 8100.144076570543, 00:14:02.723 "mibps": 1012.5180095713179, 00:14:02.723 "io_failed": 0, 00:14:02.723 "io_timeout": 0, 00:14:02.723 "avg_latency_us": 119.03369714304289, 00:14:02.723 "min_latency_us": 38.167272727272724, 00:14:02.723 "max_latency_us": 1980.9745454545455 00:14:02.723 } 00:14:02.723 ], 00:14:02.723 "core_count": 1 00:14:02.723 } 00:14:02.723 
te offline 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@841 -- # killprocess 75230 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@954 -- # '[' -z 75230 ']' 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # kill -0 75230 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # uname 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75230 00:14:02.723 killing process with pid 75230 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75230' 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@973 -- # kill 75230 00:14:02.723 [2024-11-27 14:13:39.762007] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:02.723 14:13:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@978 -- # wait 75230 00:14:02.982 [2024-11-27 14:13:40.052035] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:03.917 14:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep -v Job /raidtest/tmp.3NnK9k49un 00:14:03.918 14:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # grep raid_bdev1 00:14:03.918 14:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # awk '{print $6}' 00:14:03.918 ************************************ 00:14:03.918 END TEST 
raid_write_error_test 00:14:03.918 ************************************ 00:14:03.918 14:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # fail_per_s=0.00 00:14:03.918 14:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@846 -- # has_redundancy raid1 00:14:03.918 14:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:14:03.918 14:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@199 -- # return 0 00:14:03.918 14:13:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.00 = \0\.\0\0 ]] 00:14:03.918 00:14:03.918 real 0m4.818s 00:14:03.918 user 0m5.947s 00:14:03.918 sys 0m0.574s 00:14:03.918 14:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:03.918 14:13:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.177 14:13:41 bdev_raid -- bdev/bdev_raid.sh@976 -- # '[' true = true ']' 00:14:04.177 14:13:41 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:14:04.177 14:13:41 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 2 false false true 00:14:04.177 14:13:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:04.177 14:13:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:04.177 14:13:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:04.177 ************************************ 00:14:04.177 START TEST raid_rebuild_test 00:14:04.177 ************************************ 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false false true 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 
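The `verify_raid_bdev_state` helper seen throughout the test above feeds `rpc_cmd bdev_raid_get_bdevs all` through `jq` and compares the resulting fields against expectations. Below is a self-contained sketch of that field check, using the degraded-array values from the post-failure JSON in the log; plain `grep`/`cut` stand in for `jq` purely for portability here (an assumption, not what the script itself does).

```shell
#!/usr/bin/env bash
# JSON values copied from the post-failure bdev_raid_get_bdevs output above.
raid_bdev_info='{
  "name": "raid_bdev1",
  "state": "online",
  "raid_level": "raid1",
  "num_base_bdevs": 4,
  "num_base_bdevs_discovered": 3,
  "num_base_bdevs_operational": 3
}'

# Crude extraction; the real helper uses: jq -r '.[] | select(.name == "raid_bdev1")'
field() { printf '%s\n' "$raid_bdev_info" | grep "\"$1\"" | cut -d'"' -f4; }
num()   { printf '%s\n' "$raid_bdev_info" | grep "\"$1\"" | tr -dc '0-9'; }

state=$(field state)
raid_level=$(field raid_level)
discovered=$(num num_base_bdevs_discovered)

# After failing BaseBdev1, raid1 stays online with 3 of 4 members discovered.
[ "$state" = "online" ] && [ "$raid_level" = "raid1" ] && [ "$discovered" -eq 3 ] \
    && echo "raid_bdev1 state verified"
```

This mirrors why the test expects `num_base_bdevs_discovered` to drop from 4 to 3 while `state` remains `online`: raid1 tolerates the injected write failure on one member.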
00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true 
']' 00:14:04.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=75379 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 75379 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 75379 ']' 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.177 14:13:41 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.177 [2024-11-27 14:13:41.333942] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:14:04.177 [2024-11-27 14:13:41.334500] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75379 ] 00:14:04.177 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:04.177 Zero copy mechanism will not be used. 
00:14:04.436 [2024-11-27 14:13:41.543284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.436 [2024-11-27 14:13:41.671068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.696 [2024-11-27 14:13:41.866592] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:04.696 [2024-11-27 14:13:41.866687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 BaseBdev1_malloc 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 [2024-11-27 14:13:42.391196] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:05.264 [2024-11-27 14:13:42.391463] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.264 [2024-11-27 14:13:42.391504] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:05.264 [2024-11-27 14:13:42.391524] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.264 [2024-11-27 14:13:42.394377] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.264 [2024-11-27 14:13:42.394436] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:05.264 BaseBdev1 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 BaseBdev2_malloc 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 [2024-11-27 14:13:42.450332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:05.264 [2024-11-27 14:13:42.450409] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.264 [2024-11-27 14:13:42.450442] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:05.264 [2024-11-27 14:13:42.450460] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.264 [2024-11-27 14:13:42.453369] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.264 [2024-11-27 14:13:42.453417] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:05.264 BaseBdev2 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 spare_malloc 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 spare_delay 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 [2024-11-27 14:13:42.524252] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:05.264 [2024-11-27 14:13:42.524343] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:05.264 [2024-11-27 14:13:42.524373] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:05.264 [2024-11-27 14:13:42.524390] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:05.264 [2024-11-27 
14:13:42.527427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:05.264 [2024-11-27 14:13:42.527644] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:05.264 spare 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.264 [2024-11-27 14:13:42.532362] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:05.264 [2024-11-27 14:13:42.535003] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:05.264 [2024-11-27 14:13:42.535177] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:05.264 [2024-11-27 14:13:42.535198] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:05.264 [2024-11-27 14:13:42.535551] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:05.264 [2024-11-27 14:13:42.535829] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:05.264 [2024-11-27 14:13:42.535865] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:05.264 [2024-11-27 14:13:42.536086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:05.264 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.265 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:05.265 14:13:42 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:05.265 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:05.265 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:05.265 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:05.265 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:05.265 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:05.265 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:05.265 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:05.265 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:05.523 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:05.523 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.523 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.524 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.524 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.524 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:05.524 "name": "raid_bdev1", 00:14:05.524 "uuid": "27131d98-84a0-4ec6-b496-47a1a9240aa5", 00:14:05.524 "strip_size_kb": 0, 00:14:05.524 "state": "online", 00:14:05.524 "raid_level": "raid1", 00:14:05.524 "superblock": false, 00:14:05.524 "num_base_bdevs": 2, 00:14:05.524 "num_base_bdevs_discovered": 2, 00:14:05.524 "num_base_bdevs_operational": 2, 00:14:05.524 "base_bdevs_list": [ 00:14:05.524 { 00:14:05.524 "name": "BaseBdev1", 
00:14:05.524 "uuid": "29054484-c0fd-5c2f-869d-1cb15ce60210", 00:14:05.524 "is_configured": true, 00:14:05.524 "data_offset": 0, 00:14:05.524 "data_size": 65536 00:14:05.524 }, 00:14:05.524 { 00:14:05.524 "name": "BaseBdev2", 00:14:05.524 "uuid": "4b2fad7e-7a49-50e6-b6f0-2de9de8562c0", 00:14:05.524 "is_configured": true, 00:14:05.524 "data_offset": 0, 00:14:05.524 "data_size": 65536 00:14:05.524 } 00:14:05.524 ] 00:14:05.524 }' 00:14:05.524 14:13:42 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:05.524 14:13:42 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.782 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:05.782 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.782 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:05.782 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.782 [2024-11-27 14:13:43.052921] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:06.041 
14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:06.041 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:06.042 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:06.042 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:06.042 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:06.042 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:06.042 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.042 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:06.303 [2024-11-27 14:13:43.444735] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:06.303 /dev/nbd0 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:06.303 1+0 records in 00:14:06.303 1+0 records out 00:14:06.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000277374 s, 14.8 MB/s 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:06.303 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:06.304 14:13:43 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:06.304 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:06.304 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:06.304 14:13:43 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 
00:14:12.902 65536+0 records in 00:14:12.902 65536+0 records out 00:14:12.902 33554432 bytes (34 MB, 32 MiB) copied, 6.39008 s, 5.3 MB/s 00:14:12.902 14:13:49 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:12.902 14:13:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:12.902 14:13:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:12.902 14:13:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:12.902 14:13:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:12.902 14:13:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:12.902 14:13:49 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:13.162 [2024-11-27 14:13:50.192708] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.162 [2024-11-27 14:13:50.224768] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.162 14:13:50 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:13.162 "name": "raid_bdev1", 00:14:13.162 "uuid": "27131d98-84a0-4ec6-b496-47a1a9240aa5", 00:14:13.162 "strip_size_kb": 0, 00:14:13.162 "state": "online", 00:14:13.162 "raid_level": "raid1", 00:14:13.162 "superblock": false, 00:14:13.162 "num_base_bdevs": 2, 00:14:13.162 "num_base_bdevs_discovered": 1, 00:14:13.162 "num_base_bdevs_operational": 1, 00:14:13.162 "base_bdevs_list": [ 00:14:13.162 { 00:14:13.162 "name": null, 00:14:13.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.162 "is_configured": false, 00:14:13.162 "data_offset": 0, 00:14:13.162 "data_size": 65536 00:14:13.162 }, 00:14:13.162 { 00:14:13.162 "name": "BaseBdev2", 00:14:13.162 "uuid": "4b2fad7e-7a49-50e6-b6f0-2de9de8562c0", 00:14:13.162 "is_configured": true, 00:14:13.162 "data_offset": 0, 00:14:13.162 "data_size": 65536 00:14:13.162 } 00:14:13.162 ] 00:14:13.162 }' 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:13.162 14:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.731 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:13.731 14:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:13.731 14:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.731 [2024-11-27 14:13:50.745025] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:13.731 [2024-11-27 14:13:50.762100] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09bd0 00:14:13.731 14:13:50 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:13.731 14:13:50 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:13.731 [2024-11-27 14:13:50.764801] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started 
rebuild on raid bdev raid_bdev1 00:14:14.669 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:14.669 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:14.669 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:14.669 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:14.669 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:14.669 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.669 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.669 14:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.669 14:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.669 14:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.669 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:14.669 "name": "raid_bdev1", 00:14:14.669 "uuid": "27131d98-84a0-4ec6-b496-47a1a9240aa5", 00:14:14.669 "strip_size_kb": 0, 00:14:14.669 "state": "online", 00:14:14.669 "raid_level": "raid1", 00:14:14.669 "superblock": false, 00:14:14.669 "num_base_bdevs": 2, 00:14:14.669 "num_base_bdevs_discovered": 2, 00:14:14.669 "num_base_bdevs_operational": 2, 00:14:14.669 "process": { 00:14:14.669 "type": "rebuild", 00:14:14.669 "target": "spare", 00:14:14.669 "progress": { 00:14:14.670 "blocks": 20480, 00:14:14.670 "percent": 31 00:14:14.670 } 00:14:14.670 }, 00:14:14.670 "base_bdevs_list": [ 00:14:14.670 { 00:14:14.670 "name": "spare", 00:14:14.670 "uuid": "1204d48d-2daa-5755-8792-8fc47a649bc9", 00:14:14.670 "is_configured": true, 00:14:14.670 "data_offset": 0, 00:14:14.670 
"data_size": 65536 00:14:14.670 }, 00:14:14.670 { 00:14:14.670 "name": "BaseBdev2", 00:14:14.670 "uuid": "4b2fad7e-7a49-50e6-b6f0-2de9de8562c0", 00:14:14.670 "is_configured": true, 00:14:14.670 "data_offset": 0, 00:14:14.670 "data_size": 65536 00:14:14.670 } 00:14:14.670 ] 00:14:14.670 }' 00:14:14.670 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:14.670 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:14.670 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:14.670 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:14.670 14:13:51 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:14.670 14:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.670 14:13:51 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.670 [2024-11-27 14:13:51.934420] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.929 [2024-11-27 14:13:51.974354] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:14.929 [2024-11-27 14:13:51.974632] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:14.929 [2024-11-27 14:13:51.974766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:14.929 [2024-11-27 14:13:51.974906] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:14.929 "name": "raid_bdev1", 00:14:14.929 "uuid": "27131d98-84a0-4ec6-b496-47a1a9240aa5", 00:14:14.929 "strip_size_kb": 0, 00:14:14.929 "state": "online", 00:14:14.929 "raid_level": "raid1", 00:14:14.929 "superblock": false, 00:14:14.929 "num_base_bdevs": 2, 00:14:14.929 "num_base_bdevs_discovered": 1, 00:14:14.929 "num_base_bdevs_operational": 1, 00:14:14.929 "base_bdevs_list": [ 00:14:14.929 { 00:14:14.929 "name": null, 00:14:14.929 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:14.929 
"is_configured": false, 00:14:14.929 "data_offset": 0, 00:14:14.929 "data_size": 65536 00:14:14.929 }, 00:14:14.929 { 00:14:14.929 "name": "BaseBdev2", 00:14:14.929 "uuid": "4b2fad7e-7a49-50e6-b6f0-2de9de8562c0", 00:14:14.929 "is_configured": true, 00:14:14.929 "data_offset": 0, 00:14:14.929 "data_size": 65536 00:14:14.929 } 00:14:14.929 ] 00:14:14.929 }' 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:14.929 14:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:15.497 "name": "raid_bdev1", 00:14:15.497 "uuid": "27131d98-84a0-4ec6-b496-47a1a9240aa5", 00:14:15.497 "strip_size_kb": 0, 00:14:15.497 "state": "online", 00:14:15.497 "raid_level": "raid1", 00:14:15.497 "superblock": false, 00:14:15.497 "num_base_bdevs": 2, 00:14:15.497 
"num_base_bdevs_discovered": 1, 00:14:15.497 "num_base_bdevs_operational": 1, 00:14:15.497 "base_bdevs_list": [ 00:14:15.497 { 00:14:15.497 "name": null, 00:14:15.497 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:15.497 "is_configured": false, 00:14:15.497 "data_offset": 0, 00:14:15.497 "data_size": 65536 00:14:15.497 }, 00:14:15.497 { 00:14:15.497 "name": "BaseBdev2", 00:14:15.497 "uuid": "4b2fad7e-7a49-50e6-b6f0-2de9de8562c0", 00:14:15.497 "is_configured": true, 00:14:15.497 "data_offset": 0, 00:14:15.497 "data_size": 65536 00:14:15.497 } 00:14:15.497 ] 00:14:15.497 }' 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:15.497 14:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:15.497 [2024-11-27 14:13:52.692532] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:15.498 [2024-11-27 14:13:52.708390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09ca0 00:14:15.498 14:13:52 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:15.498 14:13:52 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:15.498 [2024-11-27 14:13:52.710928] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:16.877 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.877 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.877 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.877 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.877 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.877 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.877 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:16.877 14:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.877 14:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.877 14:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.877 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.877 "name": "raid_bdev1", 00:14:16.877 "uuid": "27131d98-84a0-4ec6-b496-47a1a9240aa5", 00:14:16.877 "strip_size_kb": 0, 00:14:16.877 "state": "online", 00:14:16.877 "raid_level": "raid1", 00:14:16.877 "superblock": false, 00:14:16.877 "num_base_bdevs": 2, 00:14:16.877 "num_base_bdevs_discovered": 2, 00:14:16.877 "num_base_bdevs_operational": 2, 00:14:16.877 "process": { 00:14:16.878 "type": "rebuild", 00:14:16.878 "target": "spare", 00:14:16.878 "progress": { 00:14:16.878 "blocks": 20480, 00:14:16.878 "percent": 31 00:14:16.878 } 00:14:16.878 }, 00:14:16.878 "base_bdevs_list": [ 00:14:16.878 { 00:14:16.878 "name": "spare", 00:14:16.878 "uuid": "1204d48d-2daa-5755-8792-8fc47a649bc9", 00:14:16.878 "is_configured": true, 00:14:16.878 "data_offset": 0, 00:14:16.878 "data_size": 65536 00:14:16.878 }, 00:14:16.878 { 00:14:16.878 "name": "BaseBdev2", 00:14:16.878 "uuid": 
"4b2fad7e-7a49-50e6-b6f0-2de9de8562c0", 00:14:16.878 "is_configured": true, 00:14:16.878 "data_offset": 0, 00:14:16.878 "data_size": 65536 00:14:16.878 } 00:14:16.878 ] 00:14:16.878 }' 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=400 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:16.878 "name": "raid_bdev1", 00:14:16.878 "uuid": "27131d98-84a0-4ec6-b496-47a1a9240aa5", 00:14:16.878 "strip_size_kb": 0, 00:14:16.878 "state": "online", 00:14:16.878 "raid_level": "raid1", 00:14:16.878 "superblock": false, 00:14:16.878 "num_base_bdevs": 2, 00:14:16.878 "num_base_bdevs_discovered": 2, 00:14:16.878 "num_base_bdevs_operational": 2, 00:14:16.878 "process": { 00:14:16.878 "type": "rebuild", 00:14:16.878 "target": "spare", 00:14:16.878 "progress": { 00:14:16.878 "blocks": 22528, 00:14:16.878 "percent": 34 00:14:16.878 } 00:14:16.878 }, 00:14:16.878 "base_bdevs_list": [ 00:14:16.878 { 00:14:16.878 "name": "spare", 00:14:16.878 "uuid": "1204d48d-2daa-5755-8792-8fc47a649bc9", 00:14:16.878 "is_configured": true, 00:14:16.878 "data_offset": 0, 00:14:16.878 "data_size": 65536 00:14:16.878 }, 00:14:16.878 { 00:14:16.878 "name": "BaseBdev2", 00:14:16.878 "uuid": "4b2fad7e-7a49-50e6-b6f0-2de9de8562c0", 00:14:16.878 "is_configured": true, 00:14:16.878 "data_offset": 0, 00:14:16.878 "data_size": 65536 00:14:16.878 } 00:14:16.878 ] 00:14:16.878 }' 00:14:16.878 14:13:53 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:16.878 14:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:16.878 14:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:16.878 14:13:54 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:16.878 14:13:54 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:14:17.814 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:17.814 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:17.814 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:17.814 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:17.814 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:17.814 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:17.814 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:17.814 14:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:17.814 14:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.814 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:17.814 14:13:55 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:18.073 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:18.073 "name": "raid_bdev1", 00:14:18.073 "uuid": "27131d98-84a0-4ec6-b496-47a1a9240aa5", 00:14:18.073 "strip_size_kb": 0, 00:14:18.073 "state": "online", 00:14:18.073 "raid_level": "raid1", 00:14:18.073 "superblock": false, 00:14:18.073 "num_base_bdevs": 2, 00:14:18.073 "num_base_bdevs_discovered": 2, 00:14:18.073 "num_base_bdevs_operational": 2, 00:14:18.073 "process": { 00:14:18.073 "type": "rebuild", 00:14:18.073 "target": "spare", 00:14:18.073 "progress": { 00:14:18.073 "blocks": 47104, 00:14:18.073 "percent": 71 00:14:18.073 } 00:14:18.073 }, 00:14:18.073 "base_bdevs_list": [ 00:14:18.073 { 00:14:18.073 "name": "spare", 00:14:18.073 "uuid": 
"1204d48d-2daa-5755-8792-8fc47a649bc9", 00:14:18.073 "is_configured": true, 00:14:18.073 "data_offset": 0, 00:14:18.073 "data_size": 65536 00:14:18.073 }, 00:14:18.073 { 00:14:18.073 "name": "BaseBdev2", 00:14:18.073 "uuid": "4b2fad7e-7a49-50e6-b6f0-2de9de8562c0", 00:14:18.073 "is_configured": true, 00:14:18.073 "data_offset": 0, 00:14:18.073 "data_size": 65536 00:14:18.073 } 00:14:18.073 ] 00:14:18.073 }' 00:14:18.073 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:18.073 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:18.073 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:18.073 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:18.073 14:13:55 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:19.010 [2024-11-27 14:13:55.935891] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:19.010 [2024-11-27 14:13:55.935992] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:19.010 [2024-11-27 14:13:55.936060] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:19.010 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:19.010 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:19.010 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.010 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:19.010 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:19.010 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.010 14:13:56 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.010 14:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.010 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.010 14:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.010 14:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.010 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.010 "name": "raid_bdev1", 00:14:19.011 "uuid": "27131d98-84a0-4ec6-b496-47a1a9240aa5", 00:14:19.011 "strip_size_kb": 0, 00:14:19.011 "state": "online", 00:14:19.011 "raid_level": "raid1", 00:14:19.011 "superblock": false, 00:14:19.011 "num_base_bdevs": 2, 00:14:19.011 "num_base_bdevs_discovered": 2, 00:14:19.011 "num_base_bdevs_operational": 2, 00:14:19.011 "base_bdevs_list": [ 00:14:19.011 { 00:14:19.011 "name": "spare", 00:14:19.011 "uuid": "1204d48d-2daa-5755-8792-8fc47a649bc9", 00:14:19.011 "is_configured": true, 00:14:19.011 "data_offset": 0, 00:14:19.011 "data_size": 65536 00:14:19.011 }, 00:14:19.011 { 00:14:19.011 "name": "BaseBdev2", 00:14:19.011 "uuid": "4b2fad7e-7a49-50e6-b6f0-2de9de8562c0", 00:14:19.011 "is_configured": true, 00:14:19.011 "data_offset": 0, 00:14:19.011 "data_size": 65536 00:14:19.011 } 00:14:19.011 ] 00:14:19.011 }' 00:14:19.011 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # 
break 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:19.271 "name": "raid_bdev1", 00:14:19.271 "uuid": "27131d98-84a0-4ec6-b496-47a1a9240aa5", 00:14:19.271 "strip_size_kb": 0, 00:14:19.271 "state": "online", 00:14:19.271 "raid_level": "raid1", 00:14:19.271 "superblock": false, 00:14:19.271 "num_base_bdevs": 2, 00:14:19.271 "num_base_bdevs_discovered": 2, 00:14:19.271 "num_base_bdevs_operational": 2, 00:14:19.271 "base_bdevs_list": [ 00:14:19.271 { 00:14:19.271 "name": "spare", 00:14:19.271 "uuid": "1204d48d-2daa-5755-8792-8fc47a649bc9", 00:14:19.271 "is_configured": true, 00:14:19.271 "data_offset": 0, 00:14:19.271 "data_size": 65536 00:14:19.271 }, 00:14:19.271 { 00:14:19.271 "name": "BaseBdev2", 00:14:19.271 "uuid": "4b2fad7e-7a49-50e6-b6f0-2de9de8562c0", 00:14:19.271 "is_configured": true, 00:14:19.271 "data_offset": 0, 00:14:19.271 "data_size": 65536 
00:14:19.271 } 00:14:19.271 ] 00:14:19.271 }' 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:19.271 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:19.530 
14:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:19.530 "name": "raid_bdev1", 00:14:19.530 "uuid": "27131d98-84a0-4ec6-b496-47a1a9240aa5", 00:14:19.530 "strip_size_kb": 0, 00:14:19.530 "state": "online", 00:14:19.530 "raid_level": "raid1", 00:14:19.530 "superblock": false, 00:14:19.530 "num_base_bdevs": 2, 00:14:19.530 "num_base_bdevs_discovered": 2, 00:14:19.530 "num_base_bdevs_operational": 2, 00:14:19.530 "base_bdevs_list": [ 00:14:19.530 { 00:14:19.530 "name": "spare", 00:14:19.530 "uuid": "1204d48d-2daa-5755-8792-8fc47a649bc9", 00:14:19.530 "is_configured": true, 00:14:19.530 "data_offset": 0, 00:14:19.530 "data_size": 65536 00:14:19.530 }, 00:14:19.530 { 00:14:19.530 "name": "BaseBdev2", 00:14:19.530 "uuid": "4b2fad7e-7a49-50e6-b6f0-2de9de8562c0", 00:14:19.530 "is_configured": true, 00:14:19.530 "data_offset": 0, 00:14:19.530 "data_size": 65536 00:14:19.530 } 00:14:19.530 ] 00:14:19.530 }' 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:19.530 14:13:56 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.099 [2024-11-27 14:13:57.082867] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:20.099 [2024-11-27 14:13:57.083034] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:20.099 [2024-11-27 14:13:57.083207] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:20.099 [2024-11-27 14:13:57.083296] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:20.099 [2024-11-27 14:13:57.083313] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:20.099 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:20.358 /dev/nbd0 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.358 1+0 records in 00:14:20.358 1+0 records out 00:14:20.358 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000791882 s, 5.2 MB/s 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.358 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:20.359 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # 
rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.359 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.359 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:20.359 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.359 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:20.359 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:20.619 /dev/nbd1 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:20.619 1+0 records in 00:14:20.619 1+0 records out 00:14:20.619 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000529853 s, 7.7 MB/s 00:14:20.619 14:13:57 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:20.619 14:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:14:20.879 14:13:57 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:20.879 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:20.879 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:20.879 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:20.879 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:14:20.879 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:20.879 14:13:57 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:21.138 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:21.138 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:21.138 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:21.138 
14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.139 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.139 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:21.139 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:21.139 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.139 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:21.139 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 75379 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 75379 ']' 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 75379 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 
-- # uname 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.399 14:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75379 00:14:21.658 killing process with pid 75379 00:14:21.658 Received shutdown signal, test time was about 60.000000 seconds 00:14:21.658 00:14:21.658 Latency(us) 00:14:21.658 [2024-11-27T14:13:58.936Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:21.658 [2024-11-27T14:13:58.936Z] =================================================================================================================== 00:14:21.658 [2024-11-27T14:13:58.936Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:21.658 14:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.658 14:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.658 14:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75379' 00:14:21.658 14:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 75379 00:14:21.658 [2024-11-27 14:13:58.678540] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:21.658 14:13:58 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 75379 00:14:21.917 [2024-11-27 14:13:58.946516] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:22.851 ************************************ 00:14:22.851 END TEST raid_rebuild_test 00:14:22.851 ************************************ 00:14:22.851 14:13:59 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:14:22.851 00:14:22.851 real 0m18.750s 00:14:22.851 user 0m21.252s 00:14:22.851 sys 0m3.442s 00:14:22.851 14:13:59 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:22.851 14:13:59 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.851 14:14:00 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 2 true false true 00:14:22.851 14:14:00 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:22.851 14:14:00 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:22.851 14:14:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:22.851 ************************************ 00:14:22.851 START TEST raid_rebuild_test_sb 00:14:22.851 ************************************ 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:22.851 14:14:00 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=75836 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 75836 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 75836 ']' 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:22.851 
14:14:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:22.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:22.851 14:14:00 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.110 [2024-11-27 14:14:00.140159] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:14:23.110 I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:23.110 Zero copy mechanism will not be used. 00:14:23.110 [2024-11-27 14:14:00.141114] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75836 ] 00:14:23.110 [2024-11-27 14:14:00.338161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:23.368 [2024-11-27 14:14:00.496169] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.687 [2024-11-27 14:14:00.705216] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.687 [2024-11-27 14:14:00.705293] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 BaseBdev1_malloc 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 [2024-11-27 14:14:01.159324] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:23.946 [2024-11-27 14:14:01.159554] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.946 [2024-11-27 14:14:01.159613] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:14:23.946 [2024-11-27 14:14:01.159643] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.946 [2024-11-27 14:14:01.162458] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.946 [2024-11-27 14:14:01.162509] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:23.946 BaseBdev1 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 BaseBdev2_malloc 00:14:23.946 
14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:23.946 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:23.946 [2024-11-27 14:14:01.216981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:23.946 [2024-11-27 14:14:01.217070] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:23.946 [2024-11-27 14:14:01.217106] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:23.946 [2024-11-27 14:14:01.217125] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:23.946 [2024-11-27 14:14:01.219990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:23.946 [2024-11-27 14:14:01.220041] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:24.206 BaseBdev2 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.206 spare_malloc 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.206 spare_delay 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.206 [2024-11-27 14:14:01.285020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:24.206 [2024-11-27 14:14:01.285097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:24.206 [2024-11-27 14:14:01.285129] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:14:24.206 [2024-11-27 14:14:01.285148] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:24.206 [2024-11-27 14:14:01.288028] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:24.206 [2024-11-27 14:14:01.288082] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:24.206 spare 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.206 [2024-11-27 14:14:01.293090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:24.206 [2024-11-27 
14:14:01.295564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:24.206 [2024-11-27 14:14:01.296040] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:24.206 [2024-11-27 14:14:01.296074] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:24.206 [2024-11-27 14:14:01.296432] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:24.206 [2024-11-27 14:14:01.296678] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:24.206 [2024-11-27 14:14:01.296695] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:24.206 [2024-11-27 14:14:01.297022] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:24.206 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:24.207 "name": "raid_bdev1", 00:14:24.207 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:24.207 "strip_size_kb": 0, 00:14:24.207 "state": "online", 00:14:24.207 "raid_level": "raid1", 00:14:24.207 "superblock": true, 00:14:24.207 "num_base_bdevs": 2, 00:14:24.207 "num_base_bdevs_discovered": 2, 00:14:24.207 "num_base_bdevs_operational": 2, 00:14:24.207 "base_bdevs_list": [ 00:14:24.207 { 00:14:24.207 "name": "BaseBdev1", 00:14:24.207 "uuid": "f8477464-ee94-58c6-8c3c-182ae018496a", 00:14:24.207 "is_configured": true, 00:14:24.207 "data_offset": 2048, 00:14:24.207 "data_size": 63488 00:14:24.207 }, 00:14:24.207 { 00:14:24.207 "name": "BaseBdev2", 00:14:24.207 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:24.207 "is_configured": true, 00:14:24.207 "data_offset": 2048, 00:14:24.207 "data_size": 63488 00:14:24.207 } 00:14:24.207 ] 00:14:24.207 }' 00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:24.207 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:24.776 [2024-11-27 14:14:01.781566] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local 
bdev_list 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:24.776 14:14:01 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:14:25.036 [2024-11-27 14:14:02.149422] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:14:25.036 /dev/nbd0 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:25.036 1+0 records in 00:14:25.036 1+0 records out 00:14:25.036 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000485938 s, 8.4 MB/s 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:14:25.036 14:14:02 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:14:31.595 63488+0 records in 00:14:31.595 63488+0 records out 00:14:31.595 32505856 bytes (33 MB, 31 MiB) copied, 6.62881 s, 4.9 MB/s 00:14:31.595 14:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:14:31.595 14:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:31.595 14:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:14:31.595 14:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:31.595 14:14:08 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@51 -- # local i 00:14:31.595 14:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:31.595 14:14:08 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:32.160 [2024-11-27 14:14:09.140439] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.160 [2024-11-27 14:14:09.152542] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:32.160 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:32.161 "name": "raid_bdev1", 00:14:32.161 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:32.161 "strip_size_kb": 0, 00:14:32.161 "state": "online", 00:14:32.161 "raid_level": "raid1", 00:14:32.161 "superblock": true, 00:14:32.161 "num_base_bdevs": 2, 00:14:32.161 "num_base_bdevs_discovered": 1, 00:14:32.161 "num_base_bdevs_operational": 1, 00:14:32.161 "base_bdevs_list": [ 00:14:32.161 { 00:14:32.161 "name": null, 00:14:32.161 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:14:32.161 "is_configured": false, 00:14:32.161 "data_offset": 0, 00:14:32.161 "data_size": 63488 00:14:32.161 }, 00:14:32.161 { 00:14:32.161 "name": "BaseBdev2", 00:14:32.161 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:32.161 "is_configured": true, 00:14:32.161 "data_offset": 2048, 00:14:32.161 "data_size": 63488 00:14:32.161 } 00:14:32.161 ] 00:14:32.161 }' 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:32.161 14:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.488 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:32.488 14:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:32.488 14:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.488 [2024-11-27 14:14:09.640743] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:32.488 [2024-11-27 14:14:09.657675] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3360 00:14:32.488 14:14:09 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:32.488 14:14:09 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:32.488 [2024-11-27 14:14:09.660352] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:33.437 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:33.437 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:33.437 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:33.437 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:33.437 
14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:33.437 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.437 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.437 14:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.437 14:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.437 14:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:33.696 "name": "raid_bdev1", 00:14:33.696 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:33.696 "strip_size_kb": 0, 00:14:33.696 "state": "online", 00:14:33.696 "raid_level": "raid1", 00:14:33.696 "superblock": true, 00:14:33.696 "num_base_bdevs": 2, 00:14:33.696 "num_base_bdevs_discovered": 2, 00:14:33.696 "num_base_bdevs_operational": 2, 00:14:33.696 "process": { 00:14:33.696 "type": "rebuild", 00:14:33.696 "target": "spare", 00:14:33.696 "progress": { 00:14:33.696 "blocks": 20480, 00:14:33.696 "percent": 32 00:14:33.696 } 00:14:33.696 }, 00:14:33.696 "base_bdevs_list": [ 00:14:33.696 { 00:14:33.696 "name": "spare", 00:14:33.696 "uuid": "d280eb63-7e52-50ee-833d-122ce475e902", 00:14:33.696 "is_configured": true, 00:14:33.696 "data_offset": 2048, 00:14:33.696 "data_size": 63488 00:14:33.696 }, 00:14:33.696 { 00:14:33.696 "name": "BaseBdev2", 00:14:33.696 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:33.696 "is_configured": true, 00:14:33.696 "data_offset": 2048, 00:14:33.696 "data_size": 63488 00:14:33.696 } 00:14:33.696 ] 00:14:33.696 }' 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.696 [2024-11-27 14:14:10.825648] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.696 [2024-11-27 14:14:10.869640] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:33.696 [2024-11-27 14:14:10.869924] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:33.696 [2024-11-27 14:14:10.870074] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:33.696 [2024-11-27 14:14:10.870135] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:33.696 "name": "raid_bdev1", 00:14:33.696 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:33.696 "strip_size_kb": 0, 00:14:33.696 "state": "online", 00:14:33.696 "raid_level": "raid1", 00:14:33.696 "superblock": true, 00:14:33.696 "num_base_bdevs": 2, 00:14:33.696 "num_base_bdevs_discovered": 1, 00:14:33.696 "num_base_bdevs_operational": 1, 00:14:33.696 "base_bdevs_list": [ 00:14:33.696 { 00:14:33.696 "name": null, 00:14:33.696 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.696 "is_configured": false, 00:14:33.696 "data_offset": 0, 00:14:33.696 "data_size": 63488 00:14:33.696 }, 00:14:33.696 { 00:14:33.696 "name": "BaseBdev2", 00:14:33.696 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:33.696 "is_configured": true, 00:14:33.696 "data_offset": 2048, 00:14:33.696 "data_size": 63488 00:14:33.696 } 00:14:33.696 ] 00:14:33.696 }' 00:14:33.696 14:14:10 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:33.696 14:14:10 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:34.263 "name": "raid_bdev1", 00:14:34.263 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:34.263 "strip_size_kb": 0, 00:14:34.263 "state": "online", 00:14:34.263 "raid_level": "raid1", 00:14:34.263 "superblock": true, 00:14:34.263 "num_base_bdevs": 2, 00:14:34.263 "num_base_bdevs_discovered": 1, 00:14:34.263 "num_base_bdevs_operational": 1, 00:14:34.263 "base_bdevs_list": [ 00:14:34.263 { 00:14:34.263 "name": null, 00:14:34.263 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:34.263 "is_configured": false, 00:14:34.263 "data_offset": 0, 00:14:34.263 "data_size": 63488 00:14:34.263 }, 00:14:34.263 
{ 00:14:34.263 "name": "BaseBdev2", 00:14:34.263 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:34.263 "is_configured": true, 00:14:34.263 "data_offset": 2048, 00:14:34.263 "data_size": 63488 00:14:34.263 } 00:14:34.263 ] 00:14:34.263 }' 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:34.263 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:34.523 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:34.523 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:34.523 14:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:34.523 14:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:34.523 [2024-11-27 14:14:11.570773] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:34.523 [2024-11-27 14:14:11.586389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3430 00:14:34.523 14:14:11 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:34.523 14:14:11 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:34.523 [2024-11-27 14:14:11.588947] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.459 14:14:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.459 "name": "raid_bdev1", 00:14:35.459 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:35.459 "strip_size_kb": 0, 00:14:35.459 "state": "online", 00:14:35.459 "raid_level": "raid1", 00:14:35.459 "superblock": true, 00:14:35.459 "num_base_bdevs": 2, 00:14:35.459 "num_base_bdevs_discovered": 2, 00:14:35.459 "num_base_bdevs_operational": 2, 00:14:35.459 "process": { 00:14:35.459 "type": "rebuild", 00:14:35.459 "target": "spare", 00:14:35.459 "progress": { 00:14:35.459 "blocks": 20480, 00:14:35.459 "percent": 32 00:14:35.459 } 00:14:35.459 }, 00:14:35.459 "base_bdevs_list": [ 00:14:35.459 { 00:14:35.459 "name": "spare", 00:14:35.459 "uuid": "d280eb63-7e52-50ee-833d-122ce475e902", 00:14:35.459 "is_configured": true, 00:14:35.459 "data_offset": 2048, 00:14:35.459 "data_size": 63488 00:14:35.459 }, 00:14:35.459 { 00:14:35.459 "name": "BaseBdev2", 00:14:35.459 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:35.459 "is_configured": true, 00:14:35.459 "data_offset": 2048, 00:14:35.459 "data_size": 63488 00:14:35.459 } 00:14:35.459 ] 00:14:35.459 }' 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.459 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:14:35.718 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=419 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:35.718 "name": "raid_bdev1", 00:14:35.718 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:35.718 "strip_size_kb": 0, 00:14:35.718 "state": "online", 00:14:35.718 "raid_level": "raid1", 00:14:35.718 "superblock": true, 00:14:35.718 "num_base_bdevs": 2, 00:14:35.718 "num_base_bdevs_discovered": 2, 00:14:35.718 "num_base_bdevs_operational": 2, 00:14:35.718 "process": { 00:14:35.718 "type": "rebuild", 00:14:35.718 "target": "spare", 00:14:35.718 "progress": { 00:14:35.718 "blocks": 22528, 00:14:35.718 "percent": 35 00:14:35.718 } 00:14:35.718 }, 00:14:35.718 "base_bdevs_list": [ 00:14:35.718 { 00:14:35.718 "name": "spare", 00:14:35.718 "uuid": "d280eb63-7e52-50ee-833d-122ce475e902", 00:14:35.718 "is_configured": true, 00:14:35.718 "data_offset": 2048, 00:14:35.718 "data_size": 63488 00:14:35.718 }, 00:14:35.718 { 00:14:35.718 "name": "BaseBdev2", 00:14:35.718 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:35.718 "is_configured": true, 00:14:35.718 "data_offset": 2048, 00:14:35.718 "data_size": 63488 00:14:35.718 } 00:14:35.718 ] 00:14:35.718 }' 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:35.718 14:14:12 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:35.718 14:14:12 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:36.654 14:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:36.654 14:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:36.654 14:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:36.654 14:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:36.654 14:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:36.654 14:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:36.654 14:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:36.654 14:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:36.654 14:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:36.654 14:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.654 14:14:13 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:36.912 14:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:36.912 "name": "raid_bdev1", 00:14:36.912 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:36.912 "strip_size_kb": 0, 00:14:36.912 "state": "online", 00:14:36.912 "raid_level": "raid1", 00:14:36.912 "superblock": true, 00:14:36.912 "num_base_bdevs": 2, 00:14:36.912 "num_base_bdevs_discovered": 2, 00:14:36.912 "num_base_bdevs_operational": 2, 00:14:36.912 "process": { 00:14:36.912 "type": "rebuild", 00:14:36.912 "target": "spare", 00:14:36.912 "progress": { 00:14:36.912 "blocks": 47104, 00:14:36.912 "percent": 74 00:14:36.912 } 00:14:36.912 }, 00:14:36.912 "base_bdevs_list": [ 00:14:36.912 { 
00:14:36.912 "name": "spare", 00:14:36.912 "uuid": "d280eb63-7e52-50ee-833d-122ce475e902", 00:14:36.912 "is_configured": true, 00:14:36.912 "data_offset": 2048, 00:14:36.912 "data_size": 63488 00:14:36.912 }, 00:14:36.912 { 00:14:36.912 "name": "BaseBdev2", 00:14:36.912 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:36.912 "is_configured": true, 00:14:36.912 "data_offset": 2048, 00:14:36.912 "data_size": 63488 00:14:36.912 } 00:14:36.912 ] 00:14:36.912 }' 00:14:36.912 14:14:13 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:36.912 14:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:36.912 14:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:36.912 14:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:36.912 14:14:14 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:37.528 [2024-11-27 14:14:14.713226] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:37.528 [2024-11-27 14:14:14.713330] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:37.528 [2024-11-27 14:14:14.713490] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:38.095 14:14:15 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.095 "name": "raid_bdev1", 00:14:38.095 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:38.095 "strip_size_kb": 0, 00:14:38.095 "state": "online", 00:14:38.095 "raid_level": "raid1", 00:14:38.095 "superblock": true, 00:14:38.095 "num_base_bdevs": 2, 00:14:38.095 "num_base_bdevs_discovered": 2, 00:14:38.095 "num_base_bdevs_operational": 2, 00:14:38.095 "base_bdevs_list": [ 00:14:38.095 { 00:14:38.095 "name": "spare", 00:14:38.095 "uuid": "d280eb63-7e52-50ee-833d-122ce475e902", 00:14:38.095 "is_configured": true, 00:14:38.095 "data_offset": 2048, 00:14:38.095 "data_size": 63488 00:14:38.095 }, 00:14:38.095 { 00:14:38.095 "name": "BaseBdev2", 00:14:38.095 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:38.095 "is_configured": true, 00:14:38.095 "data_offset": 2048, 00:14:38.095 "data_size": 63488 00:14:38.095 } 00:14:38.095 ] 00:14:38.095 }' 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:38.095 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:38.096 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:38.096 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:38.096 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.096 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.096 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:38.096 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.096 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.096 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:38.096 "name": "raid_bdev1", 00:14:38.096 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:38.096 "strip_size_kb": 0, 00:14:38.096 "state": "online", 00:14:38.096 "raid_level": "raid1", 00:14:38.096 "superblock": true, 00:14:38.096 "num_base_bdevs": 2, 00:14:38.096 "num_base_bdevs_discovered": 2, 00:14:38.096 "num_base_bdevs_operational": 2, 00:14:38.096 "base_bdevs_list": [ 00:14:38.096 { 00:14:38.096 "name": "spare", 00:14:38.096 "uuid": "d280eb63-7e52-50ee-833d-122ce475e902", 00:14:38.096 "is_configured": true, 00:14:38.096 "data_offset": 2048, 00:14:38.096 "data_size": 63488 00:14:38.096 }, 00:14:38.096 { 00:14:38.096 "name": 
"BaseBdev2", 00:14:38.096 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:38.096 "is_configured": true, 00:14:38.096 "data_offset": 2048, 00:14:38.096 "data_size": 63488 00:14:38.096 } 00:14:38.096 ] 00:14:38.096 }' 00:14:38.096 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:38.096 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:38.096 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:38.355 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.356 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:38.356 "name": "raid_bdev1", 00:14:38.356 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:38.356 "strip_size_kb": 0, 00:14:38.356 "state": "online", 00:14:38.356 "raid_level": "raid1", 00:14:38.356 "superblock": true, 00:14:38.356 "num_base_bdevs": 2, 00:14:38.356 "num_base_bdevs_discovered": 2, 00:14:38.356 "num_base_bdevs_operational": 2, 00:14:38.356 "base_bdevs_list": [ 00:14:38.356 { 00:14:38.356 "name": "spare", 00:14:38.357 "uuid": "d280eb63-7e52-50ee-833d-122ce475e902", 00:14:38.357 "is_configured": true, 00:14:38.357 "data_offset": 2048, 00:14:38.357 "data_size": 63488 00:14:38.357 }, 00:14:38.357 { 00:14:38.357 "name": "BaseBdev2", 00:14:38.357 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:38.357 "is_configured": true, 00:14:38.357 "data_offset": 2048, 00:14:38.357 "data_size": 63488 00:14:38.357 } 00:14:38.357 ] 00:14:38.357 }' 00:14:38.357 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:38.357 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.925 [2024-11-27 14:14:15.901081] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:38.925 [2024-11-27 14:14:15.901135] 
bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:38.925 [2024-11-27 14:14:15.901269] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:38.925 [2024-11-27 14:14:15.901354] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:38.925 [2024-11-27 14:14:15.901372] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:38.925 14:14:15 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:14:39.185 /dev/nbd0 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.185 1+0 records in 00:14:39.185 1+0 records out 00:14:39.185 4096 bytes (4.1 kB, 4.0 KiB) copied, 
0.000594843 s, 6.9 MB/s 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:39.185 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:14:39.444 /dev/nbd1 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:14:39.444 14:14:16 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:14:39.444 1+0 records in 00:14:39.444 1+0 records out 00:14:39.444 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000447894 s, 9.1 MB/s 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:14:39.444 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:14:39.702 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:14:39.702 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:14:39.702 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:14:39.702 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:14:39.702 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:14:39.702 14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.702 
14:14:16 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:14:39.961 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:14:39.961 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:14:39.961 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:14:39.961 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:39.961 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:39.961 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:14:39.961 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:39.961 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:14:39.961 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:14:39.961 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb 
-- bdev/nbd_common.sh@45 -- # return 0 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.220 [2024-11-27 14:14:17.383288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:40.220 [2024-11-27 14:14:17.383383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:40.220 [2024-11-27 14:14:17.383422] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:14:40.220 [2024-11-27 14:14:17.383437] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:40.220 [2024-11-27 14:14:17.386407] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:40.220 [2024-11-27 14:14:17.386455] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:40.220 [2024-11-27 14:14:17.386571] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:40.220 [2024-11-27 14:14:17.386645] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:40.220 [2024-11-27 14:14:17.386862] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 
is claimed 00:14:40.220 spare 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.220 [2024-11-27 14:14:17.487020] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:14:40.220 [2024-11-27 14:14:17.487291] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:14:40.220 [2024-11-27 14:14:17.487701] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1ae0 00:14:40.220 [2024-11-27 14:14:17.488050] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:14:40.220 [2024-11-27 14:14:17.488068] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:14:40.220 [2024-11-27 14:14:17.488375] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:40.220 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:40.221 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=2 00:14:40.221 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:40.221 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:40.221 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:40.221 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:40.479 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:40.479 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:40.479 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.479 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.479 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.479 14:14:17 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:40.479 "name": "raid_bdev1", 00:14:40.479 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:40.479 "strip_size_kb": 0, 00:14:40.479 "state": "online", 00:14:40.479 "raid_level": "raid1", 00:14:40.479 "superblock": true, 00:14:40.479 "num_base_bdevs": 2, 00:14:40.479 "num_base_bdevs_discovered": 2, 00:14:40.479 "num_base_bdevs_operational": 2, 00:14:40.479 "base_bdevs_list": [ 00:14:40.479 { 00:14:40.479 "name": "spare", 00:14:40.479 "uuid": "d280eb63-7e52-50ee-833d-122ce475e902", 00:14:40.479 "is_configured": true, 00:14:40.479 "data_offset": 2048, 00:14:40.479 "data_size": 63488 00:14:40.479 }, 00:14:40.479 { 00:14:40.479 "name": "BaseBdev2", 00:14:40.479 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:40.479 "is_configured": true, 00:14:40.479 "data_offset": 2048, 00:14:40.479 "data_size": 63488 00:14:40.479 } 00:14:40.479 ] 00:14:40.479 }' 00:14:40.479 14:14:17 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:40.479 14:14:17 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:41.046 "name": "raid_bdev1", 00:14:41.046 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:41.046 "strip_size_kb": 0, 00:14:41.046 "state": "online", 00:14:41.046 "raid_level": "raid1", 00:14:41.046 "superblock": true, 00:14:41.046 "num_base_bdevs": 2, 00:14:41.046 "num_base_bdevs_discovered": 2, 00:14:41.046 "num_base_bdevs_operational": 2, 00:14:41.046 "base_bdevs_list": [ 00:14:41.046 { 00:14:41.046 "name": "spare", 00:14:41.046 "uuid": "d280eb63-7e52-50ee-833d-122ce475e902", 00:14:41.046 "is_configured": true, 00:14:41.046 "data_offset": 2048, 00:14:41.046 "data_size": 63488 00:14:41.046 }, 
00:14:41.046 { 00:14:41.046 "name": "BaseBdev2", 00:14:41.046 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:41.046 "is_configured": true, 00:14:41.046 "data_offset": 2048, 00:14:41.046 "data_size": 63488 00:14:41.046 } 00:14:41.046 ] 00:14:41.046 }' 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.046 [2024-11-27 14:14:18.244603] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:41.046 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:41.047 "name": "raid_bdev1", 00:14:41.047 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:41.047 "strip_size_kb": 0, 00:14:41.047 "state": "online", 00:14:41.047 "raid_level": "raid1", 00:14:41.047 "superblock": true, 00:14:41.047 "num_base_bdevs": 2, 00:14:41.047 "num_base_bdevs_discovered": 1, 00:14:41.047 "num_base_bdevs_operational": 
1, 00:14:41.047 "base_bdevs_list": [ 00:14:41.047 { 00:14:41.047 "name": null, 00:14:41.047 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:41.047 "is_configured": false, 00:14:41.047 "data_offset": 0, 00:14:41.047 "data_size": 63488 00:14:41.047 }, 00:14:41.047 { 00:14:41.047 "name": "BaseBdev2", 00:14:41.047 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:41.047 "is_configured": true, 00:14:41.047 "data_offset": 2048, 00:14:41.047 "data_size": 63488 00:14:41.047 } 00:14:41.047 ] 00:14:41.047 }' 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:41.047 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.614 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:41.614 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.614 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:41.614 [2024-11-27 14:14:18.828876] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.614 [2024-11-27 14:14:18.829132] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:41.614 [2024-11-27 14:14:18.829160] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:14:41.614 [2024-11-27 14:14:18.829214] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:41.614 [2024-11-27 14:14:18.845585] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1bb0 00:14:41.614 14:14:18 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.614 14:14:18 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:14:41.614 [2024-11-27 14:14:18.848401] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:42.988 "name": "raid_bdev1", 00:14:42.988 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:42.988 "strip_size_kb": 0, 00:14:42.988 "state": "online", 00:14:42.988 "raid_level": "raid1", 
00:14:42.988 "superblock": true, 00:14:42.988 "num_base_bdevs": 2, 00:14:42.988 "num_base_bdevs_discovered": 2, 00:14:42.988 "num_base_bdevs_operational": 2, 00:14:42.988 "process": { 00:14:42.988 "type": "rebuild", 00:14:42.988 "target": "spare", 00:14:42.988 "progress": { 00:14:42.988 "blocks": 20480, 00:14:42.988 "percent": 32 00:14:42.988 } 00:14:42.988 }, 00:14:42.988 "base_bdevs_list": [ 00:14:42.988 { 00:14:42.988 "name": "spare", 00:14:42.988 "uuid": "d280eb63-7e52-50ee-833d-122ce475e902", 00:14:42.988 "is_configured": true, 00:14:42.988 "data_offset": 2048, 00:14:42.988 "data_size": 63488 00:14:42.988 }, 00:14:42.988 { 00:14:42.988 "name": "BaseBdev2", 00:14:42.988 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:42.988 "is_configured": true, 00:14:42.988 "data_offset": 2048, 00:14:42.988 "data_size": 63488 00:14:42.988 } 00:14:42.988 ] 00:14:42.988 }' 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:42.988 14:14:19 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.988 [2024-11-27 14:14:20.021901] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.988 [2024-11-27 14:14:20.057727] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:42.988 [2024-11-27 14:14:20.057899] bdev_raid.c: 345:raid_bdev_destroy_cb: 
*DEBUG*: raid_bdev_destroy_cb 00:14:42.988 [2024-11-27 14:14:20.057923] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:42.988 [2024-11-27 14:14:20.057939] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:42.988 "name": "raid_bdev1", 00:14:42.988 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:42.988 "strip_size_kb": 0, 00:14:42.988 "state": "online", 00:14:42.988 "raid_level": "raid1", 00:14:42.988 "superblock": true, 00:14:42.988 "num_base_bdevs": 2, 00:14:42.988 "num_base_bdevs_discovered": 1, 00:14:42.988 "num_base_bdevs_operational": 1, 00:14:42.988 "base_bdevs_list": [ 00:14:42.988 { 00:14:42.988 "name": null, 00:14:42.988 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.988 "is_configured": false, 00:14:42.988 "data_offset": 0, 00:14:42.988 "data_size": 63488 00:14:42.988 }, 00:14:42.988 { 00:14:42.988 "name": "BaseBdev2", 00:14:42.988 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:42.988 "is_configured": true, 00:14:42.988 "data_offset": 2048, 00:14:42.988 "data_size": 63488 00:14:42.988 } 00:14:42.988 ] 00:14:42.988 }' 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:42.988 14:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.555 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:43.555 14:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:43.555 14:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:43.555 [2024-11-27 14:14:20.643310] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:43.555 [2024-11-27 14:14:20.643422] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:43.555 [2024-11-27 14:14:20.643469] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:14:43.555 [2024-11-27 14:14:20.643487] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:43.555 [2024-11-27 14:14:20.644114] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:43.555 [2024-11-27 14:14:20.644159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:43.555 [2024-11-27 14:14:20.644287] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:14:43.555 [2024-11-27 14:14:20.644312] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:14:43.555 [2024-11-27 14:14:20.644325] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:14:43.555 [2024-11-27 14:14:20.644364] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:43.555 [2024-11-27 14:14:20.660394] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:14:43.555 spare 00:14:43.555 14:14:20 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:43.555 14:14:20 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:14:43.555 [2024-11-27 14:14:20.663055] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:44.493 "name": "raid_bdev1", 00:14:44.493 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:44.493 "strip_size_kb": 0, 00:14:44.493 "state": "online", 00:14:44.493 "raid_level": "raid1", 00:14:44.493 "superblock": true, 00:14:44.493 "num_base_bdevs": 2, 00:14:44.493 "num_base_bdevs_discovered": 2, 00:14:44.493 "num_base_bdevs_operational": 2, 00:14:44.493 "process": { 00:14:44.493 "type": "rebuild", 00:14:44.493 "target": "spare", 00:14:44.493 "progress": { 00:14:44.493 "blocks": 20480, 00:14:44.493 "percent": 32 00:14:44.493 } 00:14:44.493 }, 00:14:44.493 "base_bdevs_list": [ 00:14:44.493 { 00:14:44.493 "name": "spare", 00:14:44.493 "uuid": "d280eb63-7e52-50ee-833d-122ce475e902", 00:14:44.493 "is_configured": true, 00:14:44.493 "data_offset": 2048, 00:14:44.493 "data_size": 63488 00:14:44.493 }, 00:14:44.493 { 00:14:44.493 "name": "BaseBdev2", 00:14:44.493 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:44.493 "is_configured": true, 00:14:44.493 "data_offset": 2048, 00:14:44.493 "data_size": 63488 00:14:44.493 } 00:14:44.493 ] 00:14:44.493 }' 00:14:44.493 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:44.752 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:44.752 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:44.752 
14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:44.752 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.753 [2024-11-27 14:14:21.832715] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.753 [2024-11-27 14:14:21.872659] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:44.753 [2024-11-27 14:14:21.872793] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:44.753 [2024-11-27 14:14:21.872855] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:44.753 [2024-11-27 14:14:21.872868] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:44.753 "name": "raid_bdev1", 00:14:44.753 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:44.753 "strip_size_kb": 0, 00:14:44.753 "state": "online", 00:14:44.753 "raid_level": "raid1", 00:14:44.753 "superblock": true, 00:14:44.753 "num_base_bdevs": 2, 00:14:44.753 "num_base_bdevs_discovered": 1, 00:14:44.753 "num_base_bdevs_operational": 1, 00:14:44.753 "base_bdevs_list": [ 00:14:44.753 { 00:14:44.753 "name": null, 00:14:44.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:44.753 "is_configured": false, 00:14:44.753 "data_offset": 0, 00:14:44.753 "data_size": 63488 00:14:44.753 }, 00:14:44.753 { 00:14:44.753 "name": "BaseBdev2", 00:14:44.753 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:44.753 "is_configured": true, 00:14:44.753 "data_offset": 2048, 00:14:44.753 "data_size": 63488 00:14:44.753 } 00:14:44.753 ] 00:14:44.753 }' 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:44.753 14:14:21 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.321 14:14:22 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:45.321 "name": "raid_bdev1", 00:14:45.321 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:45.321 "strip_size_kb": 0, 00:14:45.321 "state": "online", 00:14:45.321 "raid_level": "raid1", 00:14:45.321 "superblock": true, 00:14:45.321 "num_base_bdevs": 2, 00:14:45.321 "num_base_bdevs_discovered": 1, 00:14:45.321 "num_base_bdevs_operational": 1, 00:14:45.321 "base_bdevs_list": [ 00:14:45.321 { 00:14:45.321 "name": null, 00:14:45.321 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:45.321 "is_configured": false, 00:14:45.321 "data_offset": 0, 00:14:45.321 "data_size": 63488 00:14:45.321 }, 00:14:45.321 { 00:14:45.321 "name": "BaseBdev2", 00:14:45.321 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:45.321 "is_configured": true, 00:14:45.321 "data_offset": 2048, 00:14:45.321 "data_size": 
63488 00:14:45.321 } 00:14:45.321 ] 00:14:45.321 }' 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:45.321 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:45.581 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:45.581 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:14:45.581 14:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.581 14:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.581 14:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.581 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:45.581 14:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.581 14:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:45.581 [2024-11-27 14:14:22.621869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:45.581 [2024-11-27 14:14:22.621968] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:45.581 [2024-11-27 14:14:22.622009] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:14:45.581 [2024-11-27 14:14:22.622049] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:45.581 [2024-11-27 14:14:22.622696] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:45.581 [2024-11-27 14:14:22.622736] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev1 00:14:45.581 [2024-11-27 14:14:22.622863] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:14:45.581 [2024-11-27 14:14:22.622885] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:45.581 [2024-11-27 14:14:22.622902] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:45.581 [2024-11-27 14:14:22.622916] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:14:45.581 BaseBdev1 00:14:45.581 14:14:22 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.581 14:14:22 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:46.517 "name": "raid_bdev1", 00:14:46.517 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:46.517 "strip_size_kb": 0, 00:14:46.517 "state": "online", 00:14:46.517 "raid_level": "raid1", 00:14:46.517 "superblock": true, 00:14:46.517 "num_base_bdevs": 2, 00:14:46.517 "num_base_bdevs_discovered": 1, 00:14:46.517 "num_base_bdevs_operational": 1, 00:14:46.517 "base_bdevs_list": [ 00:14:46.517 { 00:14:46.517 "name": null, 00:14:46.517 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.517 "is_configured": false, 00:14:46.517 "data_offset": 0, 00:14:46.517 "data_size": 63488 00:14:46.517 }, 00:14:46.517 { 00:14:46.517 "name": "BaseBdev2", 00:14:46.517 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:46.517 "is_configured": true, 00:14:46.517 "data_offset": 2048, 00:14:46.517 "data_size": 63488 00:14:46.517 } 00:14:46.517 ] 00:14:46.517 }' 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:46.517 14:14:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.143 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:47.143 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:47.143 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:14:47.143 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:47.143 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:47.143 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:47.143 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:47.143 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.143 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.143 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.143 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:47.143 "name": "raid_bdev1", 00:14:47.143 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:47.143 "strip_size_kb": 0, 00:14:47.143 "state": "online", 00:14:47.143 "raid_level": "raid1", 00:14:47.143 "superblock": true, 00:14:47.143 "num_base_bdevs": 2, 00:14:47.143 "num_base_bdevs_discovered": 1, 00:14:47.143 "num_base_bdevs_operational": 1, 00:14:47.143 "base_bdevs_list": [ 00:14:47.143 { 00:14:47.143 "name": null, 00:14:47.143 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.143 "is_configured": false, 00:14:47.144 "data_offset": 0, 00:14:47.144 "data_size": 63488 00:14:47.144 }, 00:14:47.144 { 00:14:47.144 "name": "BaseBdev2", 00:14:47.144 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:47.144 "is_configured": true, 00:14:47.144 "data_offset": 2048, 00:14:47.144 "data_size": 63488 00:14:47.144 } 00:14:47.144 ] 00:14:47.144 }' 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:47.144 14:14:24 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.144 [2024-11-27 14:14:24.286417] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:47.144 [2024-11-27 14:14:24.286671] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:14:47.144 [2024-11-27 14:14:24.286696] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:14:47.144 request: 00:14:47.144 { 00:14:47.144 "base_bdev": "BaseBdev1", 00:14:47.144 "raid_bdev": "raid_bdev1", 00:14:47.144 "method": 
"bdev_raid_add_base_bdev", 00:14:47.144 "req_id": 1 00:14:47.144 } 00:14:47.144 Got JSON-RPC error response 00:14:47.144 response: 00:14:47.144 { 00:14:47.144 "code": -22, 00:14:47.144 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:14:47.144 } 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:47.144 14:14:24 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:48.079 14:14:25 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.079 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.338 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:48.338 "name": "raid_bdev1", 00:14:48.338 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:48.338 "strip_size_kb": 0, 00:14:48.338 "state": "online", 00:14:48.338 "raid_level": "raid1", 00:14:48.338 "superblock": true, 00:14:48.338 "num_base_bdevs": 2, 00:14:48.338 "num_base_bdevs_discovered": 1, 00:14:48.338 "num_base_bdevs_operational": 1, 00:14:48.338 "base_bdevs_list": [ 00:14:48.338 { 00:14:48.338 "name": null, 00:14:48.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.338 "is_configured": false, 00:14:48.338 "data_offset": 0, 00:14:48.338 "data_size": 63488 00:14:48.338 }, 00:14:48.338 { 00:14:48.338 "name": "BaseBdev2", 00:14:48.338 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:48.338 "is_configured": true, 00:14:48.338 "data_offset": 2048, 00:14:48.338 "data_size": 63488 00:14:48.338 } 00:14:48.338 ] 00:14:48.338 }' 00:14:48.338 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:48.338 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.596 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:48.596 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:48.596 14:14:25 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:48.596 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:48.596 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:48.596 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:48.596 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:48.596 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.596 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:48.596 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.855 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:48.855 "name": "raid_bdev1", 00:14:48.855 "uuid": "daca40d7-a46a-4696-b76a-c6576ccb250b", 00:14:48.855 "strip_size_kb": 0, 00:14:48.855 "state": "online", 00:14:48.855 "raid_level": "raid1", 00:14:48.855 "superblock": true, 00:14:48.855 "num_base_bdevs": 2, 00:14:48.855 "num_base_bdevs_discovered": 1, 00:14:48.855 "num_base_bdevs_operational": 1, 00:14:48.855 "base_bdevs_list": [ 00:14:48.855 { 00:14:48.855 "name": null, 00:14:48.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:48.855 "is_configured": false, 00:14:48.855 "data_offset": 0, 00:14:48.855 "data_size": 63488 00:14:48.855 }, 00:14:48.855 { 00:14:48.855 "name": "BaseBdev2", 00:14:48.855 "uuid": "844ed52d-65ac-5db0-921c-a877dc713c3e", 00:14:48.855 "is_configured": true, 00:14:48.855 "data_offset": 2048, 00:14:48.855 "data_size": 63488 00:14:48.855 } 00:14:48.855 ] 00:14:48.855 }' 00:14:48.855 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:48.855 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:14:48.855 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:48.855 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:48.855 14:14:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 75836 00:14:48.855 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 75836 ']' 00:14:48.855 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 75836 00:14:48.855 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:14:48.855 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:48.855 14:14:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75836 00:14:48.855 14:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:48.855 killing process with pid 75836 00:14:48.855 14:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:48.855 14:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75836' 00:14:48.855 14:14:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 75836 00:14:48.855 Received shutdown signal, test time was about 60.000000 seconds 00:14:48.855 00:14:48.855 Latency(us) 00:14:48.855 [2024-11-27T14:14:26.133Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:48.855 [2024-11-27T14:14:26.133Z] =================================================================================================================== 00:14:48.855 [2024-11-27T14:14:26.133Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:14:48.855 [2024-11-27 14:14:26.019521] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:48.855 14:14:26 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 75836 00:14:48.855 [2024-11-27 14:14:26.019673] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:48.855 [2024-11-27 14:14:26.019742] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:48.855 [2024-11-27 14:14:26.019762] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:14:49.113 [2024-11-27 14:14:26.297231] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:14:50.502 00:14:50.502 real 0m27.334s 00:14:50.502 user 0m33.727s 00:14:50.502 sys 0m4.249s 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:50.502 ************************************ 00:14:50.502 END TEST raid_rebuild_test_sb 00:14:50.502 ************************************ 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.502 14:14:27 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 2 false true true 00:14:50.502 14:14:27 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:14:50.502 14:14:27 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:50.502 14:14:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:50.502 ************************************ 00:14:50.502 START TEST raid_rebuild_test_io 00:14:50.502 ************************************ 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 false true true 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local 
num_base_bdevs=2 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:14:50.502 
14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76600 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76600 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 76600 ']' 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:50.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:50.502 14:14:27 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:50.502 [2024-11-27 14:14:27.527832] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:14:50.502 [2024-11-27 14:14:27.528010] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --matchI/O size of 3145728 is greater than zero copy threshold (65536). 00:14:50.502 Zero copy mechanism will not be used. 
00:14:50.502 -allocations --file-prefix=spdk_pid76600 ] 00:14:50.502 [2024-11-27 14:14:27.716521] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.761 [2024-11-27 14:14:27.872493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:51.019 [2024-11-27 14:14:28.081208] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.019 [2024-11-27 14:14:28.081277] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.277 BaseBdev1_malloc 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.277 [2024-11-27 14:14:28.543724] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:14:51.277 [2024-11-27 14:14:28.543842] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.277 [2024-11-27 14:14:28.543875] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x616000007280 00:14:51.277 [2024-11-27 14:14:28.543894] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.277 [2024-11-27 14:14:28.546681] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.277 [2024-11-27 14:14:28.546735] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:51.277 BaseBdev1 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.277 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.536 BaseBdev2_malloc 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.536 [2024-11-27 14:14:28.601315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:14:51.536 [2024-11-27 14:14:28.601396] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.536 [2024-11-27 14:14:28.601430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:14:51.536 [2024-11-27 14:14:28.601448] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.536 [2024-11-27 14:14:28.604209] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.536 [2024-11-27 14:14:28.604259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:14:51.536 BaseBdev2 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.536 spare_malloc 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.536 spare_delay 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.536 [2024-11-27 14:14:28.681635] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:14:51.536 [2024-11-27 14:14:28.681717] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:51.536 [2024-11-27 14:14:28.681750] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 
00:14:51.536 [2024-11-27 14:14:28.681786] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:51.536 [2024-11-27 14:14:28.684685] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:51.536 [2024-11-27 14:14:28.684740] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:14:51.536 spare 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.536 [2024-11-27 14:14:28.689726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:51.536 [2024-11-27 14:14:28.692136] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:51.536 [2024-11-27 14:14:28.692269] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:14:51.536 [2024-11-27 14:14:28.692292] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:14:51.536 [2024-11-27 14:14:28.692613] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:14:51.536 [2024-11-27 14:14:28.692856] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:14:51.536 [2024-11-27 14:14:28.692887] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:14:51.536 [2024-11-27 14:14:28.693086] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.536 14:14:28 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:51.536 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:51.537 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:51.537 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.537 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:51.537 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.537 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:51.537 "name": "raid_bdev1", 00:14:51.537 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:51.537 "strip_size_kb": 0, 00:14:51.537 "state": "online", 00:14:51.537 "raid_level": "raid1", 00:14:51.537 "superblock": false, 00:14:51.537 "num_base_bdevs": 2, 
00:14:51.537 "num_base_bdevs_discovered": 2, 00:14:51.537 "num_base_bdevs_operational": 2, 00:14:51.537 "base_bdevs_list": [ 00:14:51.537 { 00:14:51.537 "name": "BaseBdev1", 00:14:51.537 "uuid": "1dc927d6-e51d-5e28-a7ff-3569b0ea88de", 00:14:51.537 "is_configured": true, 00:14:51.537 "data_offset": 0, 00:14:51.537 "data_size": 65536 00:14:51.537 }, 00:14:51.537 { 00:14:51.537 "name": "BaseBdev2", 00:14:51.537 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:51.537 "is_configured": true, 00:14:51.537 "data_offset": 0, 00:14:51.537 "data_size": 65536 00:14:51.537 } 00:14:51.537 ] 00:14:51.537 }' 00:14:51.537 14:14:28 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:51.537 14:14:28 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:14:52.122 [2024-11-27 14:14:29.202269] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r 
'.[].base_bdevs_list[0].data_offset' 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.122 [2024-11-27 14:14:29.309962] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:52.122 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:52.123 14:14:29 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:52.123 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:52.123 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:52.123 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:52.123 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.123 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.123 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.123 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:52.123 "name": "raid_bdev1", 00:14:52.123 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:52.123 "strip_size_kb": 0, 00:14:52.123 "state": "online", 00:14:52.123 "raid_level": "raid1", 00:14:52.123 "superblock": false, 00:14:52.123 "num_base_bdevs": 2, 00:14:52.123 "num_base_bdevs_discovered": 1, 00:14:52.123 "num_base_bdevs_operational": 1, 00:14:52.123 "base_bdevs_list": [ 00:14:52.123 { 00:14:52.123 "name": null, 00:14:52.123 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:52.123 "is_configured": false, 00:14:52.123 "data_offset": 0, 00:14:52.123 "data_size": 65536 00:14:52.123 }, 00:14:52.123 { 00:14:52.123 "name": "BaseBdev2", 00:14:52.123 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:52.123 "is_configured": true, 00:14:52.123 "data_offset": 0, 00:14:52.123 "data_size": 65536 00:14:52.123 } 00:14:52.123 ] 00:14:52.123 }' 00:14:52.123 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:52.123 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.380 [2024-11-27 14:14:29.438073] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:14:52.380 
I/O size of 3145728 is greater than zero copy threshold (65536). 00:14:52.380 Zero copy mechanism will not be used. 00:14:52.380 Running I/O for 60 seconds... 00:14:52.638 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:52.638 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.638 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:52.638 [2024-11-27 14:14:29.834977] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:52.638 14:14:29 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.638 14:14:29 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:14:52.897 [2024-11-27 14:14:29.914915] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:14:52.897 [2024-11-27 14:14:29.917421] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:52.897 [2024-11-27 14:14:30.060762] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:53.155 [2024-11-27 14:14:30.204749] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:53.155 [2024-11-27 14:14:30.205177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:53.413 175.00 IOPS, 525.00 MiB/s [2024-11-27T14:14:30.691Z] [2024-11-27 14:14:30.537049] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:53.413 [2024-11-27 14:14:30.537723] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:53.671 [2024-11-27 14:14:30.767179] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:53.671 "name": "raid_bdev1", 00:14:53.671 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:53.671 "strip_size_kb": 0, 00:14:53.671 "state": "online", 00:14:53.671 "raid_level": "raid1", 00:14:53.671 "superblock": false, 00:14:53.671 "num_base_bdevs": 2, 00:14:53.671 "num_base_bdevs_discovered": 2, 00:14:53.671 "num_base_bdevs_operational": 2, 00:14:53.671 "process": { 00:14:53.671 "type": "rebuild", 00:14:53.671 "target": "spare", 00:14:53.671 "progress": { 00:14:53.671 "blocks": 12288, 00:14:53.671 "percent": 18 00:14:53.671 } 00:14:53.671 }, 00:14:53.671 "base_bdevs_list": [ 00:14:53.671 { 00:14:53.671 "name": "spare", 00:14:53.671 "uuid": 
"3fa3f720-b716-5685-b629-ed75e19e30aa", 00:14:53.671 "is_configured": true, 00:14:53.671 "data_offset": 0, 00:14:53.671 "data_size": 65536 00:14:53.671 }, 00:14:53.671 { 00:14:53.671 "name": "BaseBdev2", 00:14:53.671 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:53.671 "is_configured": true, 00:14:53.671 "data_offset": 0, 00:14:53.671 "data_size": 65536 00:14:53.671 } 00:14:53.671 ] 00:14:53.671 }' 00:14:53.671 14:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:53.929 14:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:53.929 [2024-11-27 14:14:30.988748] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:53.929 14:14:30 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:53.929 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:53.929 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:14:53.929 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.929 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:53.929 [2024-11-27 14:14:31.039571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.929 [2024-11-27 14:14:31.179302] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:14:53.929 [2024-11-27 14:14:31.197631] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.929 [2024-11-27 14:14:31.197697] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:14:53.930 [2024-11-27 14:14:31.197717] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:14:54.189 
[2024-11-27 14:14:31.240995] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006080 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:54.189 "name": 
"raid_bdev1", 00:14:54.189 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:54.189 "strip_size_kb": 0, 00:14:54.189 "state": "online", 00:14:54.189 "raid_level": "raid1", 00:14:54.189 "superblock": false, 00:14:54.189 "num_base_bdevs": 2, 00:14:54.189 "num_base_bdevs_discovered": 1, 00:14:54.189 "num_base_bdevs_operational": 1, 00:14:54.189 "base_bdevs_list": [ 00:14:54.189 { 00:14:54.189 "name": null, 00:14:54.189 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.189 "is_configured": false, 00:14:54.189 "data_offset": 0, 00:14:54.189 "data_size": 65536 00:14:54.189 }, 00:14:54.189 { 00:14:54.189 "name": "BaseBdev2", 00:14:54.189 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:54.189 "is_configured": true, 00:14:54.189 "data_offset": 0, 00:14:54.189 "data_size": 65536 00:14:54.189 } 00:14:54.189 ] 00:14:54.189 }' 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:54.189 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.756 140.50 IOPS, 421.50 MiB/s [2024-11-27T14:14:32.034Z] 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:54.756 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:54.756 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:54.756 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:54.756 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.757 14:14:31 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:54.757 "name": "raid_bdev1", 00:14:54.757 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:54.757 "strip_size_kb": 0, 00:14:54.757 "state": "online", 00:14:54.757 "raid_level": "raid1", 00:14:54.757 "superblock": false, 00:14:54.757 "num_base_bdevs": 2, 00:14:54.757 "num_base_bdevs_discovered": 1, 00:14:54.757 "num_base_bdevs_operational": 1, 00:14:54.757 "base_bdevs_list": [ 00:14:54.757 { 00:14:54.757 "name": null, 00:14:54.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:54.757 "is_configured": false, 00:14:54.757 "data_offset": 0, 00:14:54.757 "data_size": 65536 00:14:54.757 }, 00:14:54.757 { 00:14:54.757 "name": "BaseBdev2", 00:14:54.757 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:54.757 "is_configured": true, 00:14:54.757 "data_offset": 0, 00:14:54.757 "data_size": 65536 00:14:54.757 } 00:14:54.757 ] 00:14:54.757 }' 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:54.757 [2024-11-27 14:14:31.940885] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.757 14:14:31 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:14:54.757 [2024-11-27 14:14:31.987594] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:14:54.757 [2024-11-27 14:14:31.990127] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:14:55.016 [2024-11-27 14:14:32.146088] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:14:55.274 [2024-11-27 14:14:32.374312] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:55.274 [2024-11-27 14:14:32.374745] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:14:55.532 153.33 IOPS, 460.00 MiB/s [2024-11-27T14:14:32.810Z] [2024-11-27 14:14:32.724453] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:55.532 [2024-11-27 14:14:32.725130] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:14:55.791 [2024-11-27 14:14:32.945031] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:55.791 [2024-11-27 14:14:32.945471] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:14:55.791 14:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:55.791 14:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:55.791 14:14:32 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:55.791 14:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:55.791 14:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:55.791 14:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:55.791 14:14:32 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:55.791 14:14:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.791 14:14:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:55.791 14:14:32 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.791 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:55.791 "name": "raid_bdev1", 00:14:55.791 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:55.791 "strip_size_kb": 0, 00:14:55.791 "state": "online", 00:14:55.791 "raid_level": "raid1", 00:14:55.791 "superblock": false, 00:14:55.791 "num_base_bdevs": 2, 00:14:55.791 "num_base_bdevs_discovered": 2, 00:14:55.791 "num_base_bdevs_operational": 2, 00:14:55.791 "process": { 00:14:55.791 "type": "rebuild", 00:14:55.791 "target": "spare", 00:14:55.792 "progress": { 00:14:55.792 "blocks": 10240, 00:14:55.792 "percent": 15 00:14:55.792 } 00:14:55.792 }, 00:14:55.792 "base_bdevs_list": [ 00:14:55.792 { 00:14:55.792 "name": "spare", 00:14:55.792 "uuid": "3fa3f720-b716-5685-b629-ed75e19e30aa", 00:14:55.792 "is_configured": true, 00:14:55.792 "data_offset": 0, 00:14:55.792 "data_size": 65536 00:14:55.792 }, 00:14:55.792 { 00:14:55.792 "name": "BaseBdev2", 00:14:55.792 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:55.792 "is_configured": true, 00:14:55.792 "data_offset": 0, 00:14:55.792 "data_size": 65536 00:14:55.792 } 00:14:55.792 ] 
00:14:55.792 }' 00:14:55.792 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=440 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:56.051 "name": "raid_bdev1", 00:14:56.051 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:56.051 "strip_size_kb": 0, 00:14:56.051 "state": "online", 00:14:56.051 "raid_level": "raid1", 00:14:56.051 "superblock": false, 00:14:56.051 "num_base_bdevs": 2, 00:14:56.051 "num_base_bdevs_discovered": 2, 00:14:56.051 "num_base_bdevs_operational": 2, 00:14:56.051 "process": { 00:14:56.051 "type": "rebuild", 00:14:56.051 "target": "spare", 00:14:56.051 "progress": { 00:14:56.051 "blocks": 10240, 00:14:56.051 "percent": 15 00:14:56.051 } 00:14:56.051 }, 00:14:56.051 "base_bdevs_list": [ 00:14:56.051 { 00:14:56.051 "name": "spare", 00:14:56.051 "uuid": "3fa3f720-b716-5685-b629-ed75e19e30aa", 00:14:56.051 "is_configured": true, 00:14:56.051 "data_offset": 0, 00:14:56.051 "data_size": 65536 00:14:56.051 }, 00:14:56.051 { 00:14:56.051 "name": "BaseBdev2", 00:14:56.051 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:56.051 "is_configured": true, 00:14:56.051 "data_offset": 0, 00:14:56.051 "data_size": 65536 00:14:56.051 } 00:14:56.051 ] 00:14:56.051 }' 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:56.051 14:14:33 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:56.051 [2024-11-27 14:14:33.288164] bdev_raid.c: 
859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:14:56.567 129.25 IOPS, 387.75 MiB/s [2024-11-27T14:14:33.845Z] [2024-11-27 14:14:33.645849] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:14:56.825 [2024-11-27 14:14:33.891388] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:56.825 [2024-11-27 14:14:33.891816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:14:57.083 [2024-11-27 14:14:34.225058] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:57.083 "name": "raid_bdev1", 00:14:57.083 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:57.083 "strip_size_kb": 0, 00:14:57.083 "state": "online", 00:14:57.083 "raid_level": "raid1", 00:14:57.083 "superblock": false, 00:14:57.083 "num_base_bdevs": 2, 00:14:57.083 "num_base_bdevs_discovered": 2, 00:14:57.083 "num_base_bdevs_operational": 2, 00:14:57.083 "process": { 00:14:57.083 "type": "rebuild", 00:14:57.083 "target": "spare", 00:14:57.083 "progress": { 00:14:57.083 "blocks": 26624, 00:14:57.083 "percent": 40 00:14:57.083 } 00:14:57.083 }, 00:14:57.083 "base_bdevs_list": [ 00:14:57.083 { 00:14:57.083 "name": "spare", 00:14:57.083 "uuid": "3fa3f720-b716-5685-b629-ed75e19e30aa", 00:14:57.083 "is_configured": true, 00:14:57.083 "data_offset": 0, 00:14:57.083 "data_size": 65536 00:14:57.083 }, 00:14:57.083 { 00:14:57.083 "name": "BaseBdev2", 00:14:57.083 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:57.083 "is_configured": true, 00:14:57.083 "data_offset": 0, 00:14:57.083 "data_size": 65536 00:14:57.083 } 00:14:57.083 ] 00:14:57.083 }' 00:14:57.083 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:57.341 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:57.341 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:57.341 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:57.341 14:14:34 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.276 119.00 IOPS, 357.00 MiB/s [2024-11-27T14:14:35.554Z] 106.50 IOPS, 319.50 MiB/s [2024-11-27T14:14:35.554Z] 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:58.276 
14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:58.276 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:58.276 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:58.276 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:14:58.276 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:58.276 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:58.276 14:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.276 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:58.276 14:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:58.276 14:14:35 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.276 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:58.276 "name": "raid_bdev1", 00:14:58.276 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:58.276 "strip_size_kb": 0, 00:14:58.276 "state": "online", 00:14:58.276 "raid_level": "raid1", 00:14:58.276 "superblock": false, 00:14:58.276 "num_base_bdevs": 2, 00:14:58.276 "num_base_bdevs_discovered": 2, 00:14:58.276 "num_base_bdevs_operational": 2, 00:14:58.276 "process": { 00:14:58.276 "type": "rebuild", 00:14:58.276 "target": "spare", 00:14:58.276 "progress": { 00:14:58.276 "blocks": 47104, 00:14:58.276 "percent": 71 00:14:58.276 } 00:14:58.276 }, 00:14:58.276 "base_bdevs_list": [ 00:14:58.276 { 00:14:58.276 "name": "spare", 00:14:58.276 "uuid": "3fa3f720-b716-5685-b629-ed75e19e30aa", 00:14:58.276 "is_configured": true, 00:14:58.276 "data_offset": 0, 00:14:58.276 "data_size": 
65536 00:14:58.276 }, 00:14:58.276 { 00:14:58.276 "name": "BaseBdev2", 00:14:58.276 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:58.276 "is_configured": true, 00:14:58.276 "data_offset": 0, 00:14:58.276 "data_size": 65536 00:14:58.276 } 00:14:58.276 ] 00:14:58.276 }' 00:14:58.276 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:58.534 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:14:58.534 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:58.534 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:14:58.534 14:14:35 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:14:58.793 [2024-11-27 14:14:36.045096] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:14:59.361 97.14 IOPS, 291.43 MiB/s [2024-11-27T14:14:36.639Z] [2024-11-27 14:14:36.482997] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:14:59.361 [2024-11-27 14:14:36.590790] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:14:59.361 [2024-11-27 14:14:36.592974] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:59.361 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:14:59.361 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:14:59.361 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.361 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:14:59.361 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:14:59.361 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.361 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.361 14:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.361 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.361 14:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.619 14:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.619 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.619 "name": "raid_bdev1", 00:14:59.619 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:59.619 "strip_size_kb": 0, 00:14:59.619 "state": "online", 00:14:59.619 "raid_level": "raid1", 00:14:59.619 "superblock": false, 00:14:59.619 "num_base_bdevs": 2, 00:14:59.619 "num_base_bdevs_discovered": 2, 00:14:59.620 "num_base_bdevs_operational": 2, 00:14:59.620 "base_bdevs_list": [ 00:14:59.620 { 00:14:59.620 "name": "spare", 00:14:59.620 "uuid": "3fa3f720-b716-5685-b629-ed75e19e30aa", 00:14:59.620 "is_configured": true, 00:14:59.620 "data_offset": 0, 00:14:59.620 "data_size": 65536 00:14:59.620 }, 00:14:59.620 { 00:14:59.620 "name": "BaseBdev2", 00:14:59.620 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:59.620 "is_configured": true, 00:14:59.620 "data_offset": 0, 00:14:59.620 "data_size": 65536 00:14:59.620 } 00:14:59.620 ] 00:14:59.620 }' 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.620 14:14:36 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:14:59.620 "name": "raid_bdev1", 00:14:59.620 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:59.620 "strip_size_kb": 0, 00:14:59.620 "state": "online", 00:14:59.620 "raid_level": "raid1", 00:14:59.620 "superblock": false, 00:14:59.620 "num_base_bdevs": 2, 00:14:59.620 "num_base_bdevs_discovered": 2, 00:14:59.620 "num_base_bdevs_operational": 2, 00:14:59.620 "base_bdevs_list": [ 00:14:59.620 { 00:14:59.620 "name": "spare", 00:14:59.620 "uuid": "3fa3f720-b716-5685-b629-ed75e19e30aa", 00:14:59.620 "is_configured": true, 00:14:59.620 "data_offset": 0, 00:14:59.620 "data_size": 65536 00:14:59.620 }, 
00:14:59.620 { 00:14:59.620 "name": "BaseBdev2", 00:14:59.620 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:59.620 "is_configured": true, 00:14:59.620 "data_offset": 0, 00:14:59.620 "data_size": 65536 00:14:59.620 } 00:14:59.620 ] 00:14:59.620 }' 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:14:59.620 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:14:59.878 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:14:59.878 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:14:59.878 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:14:59.878 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:14:59.878 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:14:59.878 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:14:59.878 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:14:59.879 "name": "raid_bdev1", 00:14:59.879 "uuid": "6df5adbc-8dc0-4e36-82d5-feb28bea9093", 00:14:59.879 "strip_size_kb": 0, 00:14:59.879 "state": "online", 00:14:59.879 "raid_level": "raid1", 00:14:59.879 "superblock": false, 00:14:59.879 "num_base_bdevs": 2, 00:14:59.879 "num_base_bdevs_discovered": 2, 00:14:59.879 "num_base_bdevs_operational": 2, 00:14:59.879 "base_bdevs_list": [ 00:14:59.879 { 00:14:59.879 "name": "spare", 00:14:59.879 "uuid": "3fa3f720-b716-5685-b629-ed75e19e30aa", 00:14:59.879 "is_configured": true, 00:14:59.879 "data_offset": 0, 00:14:59.879 "data_size": 65536 00:14:59.879 }, 00:14:59.879 { 00:14:59.879 "name": "BaseBdev2", 00:14:59.879 "uuid": "92c46fd9-e243-5edb-8566-e04076b21936", 00:14:59.879 "is_configured": true, 00:14:59.879 "data_offset": 0, 00:14:59.879 "data_size": 65536 00:14:59.879 } 00:14:59.879 ] 00:14:59.879 }' 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:14:59.879 14:14:36 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.448 88.50 IOPS, 265.50 MiB/s [2024-11-27T14:14:37.726Z] 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.448 [2024-11-27 14:14:37.458858] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: 
delete raid bdev: raid_bdev1 00:15:00.448 [2024-11-27 14:14:37.458896] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:00.448 00:15:00.448 Latency(us) 00:15:00.448 [2024-11-27T14:14:37.726Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:00.448 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:00.448 raid_bdev1 : 8.07 87.94 263.81 0.00 0.00 15188.12 297.89 112006.98 00:15:00.448 [2024-11-27T14:14:37.726Z] =================================================================================================================== 00:15:00.448 [2024-11-27T14:14:37.726Z] Total : 87.94 263.81 0.00 0.00 15188.12 297.89 112006.98 00:15:00.448 [2024-11-27 14:14:37.535341] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:00.448 [2024-11-27 14:14:37.535436] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.448 [2024-11-27 14:14:37.535546] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:00.448 [2024-11-27 14:14:37.535566] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:00.448 { 00:15:00.448 "results": [ 00:15:00.448 { 00:15:00.448 "job": "raid_bdev1", 00:15:00.448 "core_mask": "0x1", 00:15:00.448 "workload": "randrw", 00:15:00.448 "percentage": 50, 00:15:00.448 "status": "finished", 00:15:00.448 "queue_depth": 2, 00:15:00.448 "io_size": 3145728, 00:15:00.448 "runtime": 8.073999, 00:15:00.448 "iops": 87.93659746551863, 00:15:00.448 "mibps": 263.8097923965559, 00:15:00.448 "io_failed": 0, 00:15:00.448 "io_timeout": 0, 00:15:00.448 "avg_latency_us": 15188.124025608195, 00:15:00.448 "min_latency_us": 297.8909090909091, 00:15:00.448 "max_latency_us": 112006.98181818181 00:15:00.448 } 00:15:00.448 ], 00:15:00.448 "core_count": 1 00:15:00.448 } 00:15:00.448 14:14:37 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.448 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_start_disk spare /dev/nbd0 00:15:00.706 /dev/nbd0 00:15:00.706 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.707 1+0 records in 00:15:00.707 1+0 records out 00:15:00.707 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000292539 s, 14.0 MB/s 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@893 -- # return 0 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.707 14:14:37 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:00.965 /dev/nbd1 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@873 -- # local i 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:00.965 1+0 records in 00:15:00.965 1+0 records out 00:15:00.965 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000217924 s, 18.8 MB/s 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:00.965 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:01.224 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # 
nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:01.224 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.224 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:01.224 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:01.224 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:01.224 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.224 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 
00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:01.483 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 76600 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 76600 ']' 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 76600 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:01.742 14:14:38 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76600 00:15:01.742 killing process with pid 76600 00:15:01.742 Received shutdown 
signal, test time was about 9.562911 seconds 00:15:01.742 00:15:01.742 Latency(us) 00:15:01.742 [2024-11-27T14:14:39.020Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.742 [2024-11-27T14:14:39.020Z] =================================================================================================================== 00:15:01.742 [2024-11-27T14:14:39.020Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:01.742 14:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:01.742 14:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:01.742 14:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76600' 00:15:01.742 14:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 76600 00:15:01.742 [2024-11-27 14:14:39.003706] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:01.742 14:14:39 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 76600 00:15:02.002 [2024-11-27 14:14:39.210384] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:03.380 ************************************ 00:15:03.380 END TEST raid_rebuild_test_io 00:15:03.380 ************************************ 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:03.380 00:15:03.380 real 0m12.891s 00:15:03.380 user 0m16.842s 00:15:03.380 sys 0m1.422s 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.380 14:14:40 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 2 true true true 00:15:03.380 14:14:40 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:03.380 14:14:40 bdev_raid -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:15:03.380 14:14:40 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:03.380 ************************************ 00:15:03.380 START TEST raid_rebuild_test_sb_io 00:15:03.380 ************************************ 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true true true 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local 
base_bdevs 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:03.380 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:03.381 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=76986 00:15:03.381 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 76986 00:15:03.381 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 76986 ']' 00:15:03.381 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:03.381 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.381 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.381 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:15:03.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.381 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.381 14:14:40 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:03.381 [2024-11-27 14:14:40.479807] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:15:03.381 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:03.381 Zero copy mechanism will not be used. 00:15:03.381 [2024-11-27 14:14:40.480008] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76986 ] 00:15:03.640 [2024-11-27 14:14:40.664462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:03.640 [2024-11-27 14:14:40.802595] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.898 [2024-11-27 14:14:41.002916] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:03.898 [2024-11-27 14:14:41.003014] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.467 
BaseBdev1_malloc 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.467 [2024-11-27 14:14:41.485272] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:04.467 [2024-11-27 14:14:41.485348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.467 [2024-11-27 14:14:41.485381] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:04.467 [2024-11-27 14:14:41.485400] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.467 [2024-11-27 14:14:41.488207] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.467 [2024-11-27 14:14:41.488258] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:04.467 BaseBdev1 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.467 BaseBdev2_malloc 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.467 14:14:41 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.467 [2024-11-27 14:14:41.537337] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:04.467 [2024-11-27 14:14:41.537429] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.467 [2024-11-27 14:14:41.537462] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:04.467 [2024-11-27 14:14:41.537480] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.467 [2024-11-27 14:14:41.540232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.467 [2024-11-27 14:14:41.540281] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:04.467 BaseBdev2 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.467 spare_malloc 00:15:04.467 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.468 14:14:41 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.468 spare_delay 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.468 [2024-11-27 14:14:41.608168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:04.468 [2024-11-27 14:14:41.608245] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.468 [2024-11-27 14:14:41.608276] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:15:04.468 [2024-11-27 14:14:41.608295] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.468 [2024-11-27 14:14:41.611160] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.468 [2024-11-27 14:14:41.611211] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:04.468 spare 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.468 [2024-11-27 14:14:41.616249] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:04.468 [2024-11-27 14:14:41.618671] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:04.468 [2024-11-27 14:14:41.618929] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:04.468 [2024-11-27 14:14:41.618965] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:04.468 [2024-11-27 14:14:41.619284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:15:04.468 [2024-11-27 14:14:41.619517] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:04.468 [2024-11-27 14:14:41.619540] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:04.468 [2024-11-27 14:14:41.619730] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local 
num_base_bdevs_discovered 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:04.468 "name": "raid_bdev1", 00:15:04.468 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:04.468 "strip_size_kb": 0, 00:15:04.468 "state": "online", 00:15:04.468 "raid_level": "raid1", 00:15:04.468 "superblock": true, 00:15:04.468 "num_base_bdevs": 2, 00:15:04.468 "num_base_bdevs_discovered": 2, 00:15:04.468 "num_base_bdevs_operational": 2, 00:15:04.468 "base_bdevs_list": [ 00:15:04.468 { 00:15:04.468 "name": "BaseBdev1", 00:15:04.468 "uuid": "733e78d7-4c7d-5182-8199-593e1af2168b", 00:15:04.468 "is_configured": true, 00:15:04.468 "data_offset": 2048, 00:15:04.468 "data_size": 63488 00:15:04.468 }, 00:15:04.468 { 00:15:04.468 "name": "BaseBdev2", 00:15:04.468 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:04.468 "is_configured": true, 00:15:04.468 "data_offset": 2048, 00:15:04.468 "data_size": 63488 00:15:04.468 } 00:15:04.468 ] 00:15:04.468 }' 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:04.468 14:14:41 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 
00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.035 [2024-11-27 14:14:42.136914] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.035 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.036 [2024-11-27 14:14:42.240484] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:05.036 "name": 
"raid_bdev1", 00:15:05.036 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:05.036 "strip_size_kb": 0, 00:15:05.036 "state": "online", 00:15:05.036 "raid_level": "raid1", 00:15:05.036 "superblock": true, 00:15:05.036 "num_base_bdevs": 2, 00:15:05.036 "num_base_bdevs_discovered": 1, 00:15:05.036 "num_base_bdevs_operational": 1, 00:15:05.036 "base_bdevs_list": [ 00:15:05.036 { 00:15:05.036 "name": null, 00:15:05.036 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:05.036 "is_configured": false, 00:15:05.036 "data_offset": 0, 00:15:05.036 "data_size": 63488 00:15:05.036 }, 00:15:05.036 { 00:15:05.036 "name": "BaseBdev2", 00:15:05.036 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:05.036 "is_configured": true, 00:15:05.036 "data_offset": 2048, 00:15:05.036 "data_size": 63488 00:15:05.036 } 00:15:05.036 ] 00:15:05.036 }' 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:05.036 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.294 [2024-11-27 14:14:42.369098] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:15:05.294 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:05.294 Zero copy mechanism will not be used. 00:15:05.294 Running I/O for 60 seconds... 
00:15:05.553 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:05.553 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.553 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:05.553 [2024-11-27 14:14:42.758409] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:05.553 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.553 14:14:42 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:05.812 [2024-11-27 14:14:42.838398] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:05.812 [2024-11-27 14:14:42.841064] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:05.812 [2024-11-27 14:14:42.961335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:05.812 [2024-11-27 14:14:42.962033] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:06.070 [2024-11-27 14:14:43.172650] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:06.070 [2024-11-27 14:14:43.173134] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:06.587 162.00 IOPS, 486.00 MiB/s [2024-11-27T14:14:43.865Z] [2024-11-27 14:14:43.662890] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:06.587 [2024-11-27 14:14:43.663383] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:06.587 14:14:43 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:06.587 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:06.587 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:06.587 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:06.587 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:06.587 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:06.587 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.587 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.587 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.587 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.846 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:06.846 "name": "raid_bdev1", 00:15:06.846 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:06.846 "strip_size_kb": 0, 00:15:06.846 "state": "online", 00:15:06.846 "raid_level": "raid1", 00:15:06.846 "superblock": true, 00:15:06.846 "num_base_bdevs": 2, 00:15:06.846 "num_base_bdevs_discovered": 2, 00:15:06.846 "num_base_bdevs_operational": 2, 00:15:06.846 "process": { 00:15:06.846 "type": "rebuild", 00:15:06.846 "target": "spare", 00:15:06.846 "progress": { 00:15:06.846 "blocks": 10240, 00:15:06.846 "percent": 16 00:15:06.846 } 00:15:06.846 }, 00:15:06.846 "base_bdevs_list": [ 00:15:06.846 { 00:15:06.846 "name": "spare", 00:15:06.846 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:06.846 "is_configured": true, 00:15:06.846 "data_offset": 2048, 00:15:06.846 "data_size": 63488 
00:15:06.846 }, 00:15:06.846 { 00:15:06.846 "name": "BaseBdev2", 00:15:06.846 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:06.846 "is_configured": true, 00:15:06.846 "data_offset": 2048, 00:15:06.846 "data_size": 63488 00:15:06.846 } 00:15:06.846 ] 00:15:06.846 }' 00:15:06.846 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:06.846 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:06.846 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:06.846 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:06.846 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:06.846 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.846 14:14:43 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:06.846 [2024-11-27 14:14:43.972691] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.846 [2024-11-27 14:14:44.002679] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:06.846 [2024-11-27 14:14:44.112049] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:06.847 [2024-11-27 14:14:44.122751] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:06.847 [2024-11-27 14:14:44.122838] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:06.847 [2024-11-27 14:14:44.122858] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:07.105 [2024-11-27 14:14:44.182279] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 
0x60d000006080 00:15:07.105 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.105 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:07.105 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:07.105 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:07.105 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:07.105 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:07.105 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:07.105 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:07.105 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:07.106 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:07.106 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:07.106 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.106 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.106 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.106 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.106 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.106 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:07.106 "name": "raid_bdev1", 00:15:07.106 "uuid": 
"4fefa594-c420-4349-86a5-83c3609abc40", 00:15:07.106 "strip_size_kb": 0, 00:15:07.106 "state": "online", 00:15:07.106 "raid_level": "raid1", 00:15:07.106 "superblock": true, 00:15:07.106 "num_base_bdevs": 2, 00:15:07.106 "num_base_bdevs_discovered": 1, 00:15:07.106 "num_base_bdevs_operational": 1, 00:15:07.106 "base_bdevs_list": [ 00:15:07.106 { 00:15:07.106 "name": null, 00:15:07.106 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.106 "is_configured": false, 00:15:07.106 "data_offset": 0, 00:15:07.106 "data_size": 63488 00:15:07.106 }, 00:15:07.106 { 00:15:07.106 "name": "BaseBdev2", 00:15:07.106 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:07.106 "is_configured": true, 00:15:07.106 "data_offset": 2048, 00:15:07.106 "data_size": 63488 00:15:07.106 } 00:15:07.106 ] 00:15:07.106 }' 00:15:07.106 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:07.106 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.623 125.00 IOPS, 375.00 MiB/s [2024-11-27T14:14:44.901Z] 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:07.623 "name": "raid_bdev1", 00:15:07.623 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:07.623 "strip_size_kb": 0, 00:15:07.623 "state": "online", 00:15:07.623 "raid_level": "raid1", 00:15:07.623 "superblock": true, 00:15:07.623 "num_base_bdevs": 2, 00:15:07.623 "num_base_bdevs_discovered": 1, 00:15:07.623 "num_base_bdevs_operational": 1, 00:15:07.623 "base_bdevs_list": [ 00:15:07.623 { 00:15:07.623 "name": null, 00:15:07.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:07.623 "is_configured": false, 00:15:07.623 "data_offset": 0, 00:15:07.623 "data_size": 63488 00:15:07.623 }, 00:15:07.623 { 00:15:07.623 "name": "BaseBdev2", 00:15:07.623 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:07.623 "is_configured": true, 00:15:07.623 "data_offset": 2048, 00:15:07.623 "data_size": 63488 00:15:07.623 } 00:15:07.623 ] 00:15:07.623 }' 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.623 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:07.623 
[2024-11-27 14:14:44.888296] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:07.882 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.882 14:14:44 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:07.882 [2024-11-27 14:14:44.957633] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:15:07.882 [2024-11-27 14:14:44.960197] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:07.883 [2024-11-27 14:14:45.079784] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:07.883 [2024-11-27 14:14:45.080532] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:15:08.141 [2024-11-27 14:14:45.292632] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:08.141 [2024-11-27 14:14:45.293070] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:15:08.400 145.00 IOPS, 435.00 MiB/s [2024-11-27T14:14:45.678Z] [2024-11-27 14:14:45.579190] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:08.400 [2024-11-27 14:14:45.579951] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:15:08.658 [2024-11-27 14:14:45.799565] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:08.658 [2024-11-27 14:14:45.800050] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:15:08.917 14:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.917 14:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.917 14:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.917 14:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.917 14:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.917 14:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.917 14:14:45 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.917 14:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.917 14:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.917 14:14:45 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.917 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.917 "name": "raid_bdev1", 00:15:08.917 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:08.917 "strip_size_kb": 0, 00:15:08.917 "state": "online", 00:15:08.917 "raid_level": "raid1", 00:15:08.917 "superblock": true, 00:15:08.917 "num_base_bdevs": 2, 00:15:08.917 "num_base_bdevs_discovered": 2, 00:15:08.917 "num_base_bdevs_operational": 2, 00:15:08.917 "process": { 00:15:08.917 "type": "rebuild", 00:15:08.917 "target": "spare", 00:15:08.917 "progress": { 00:15:08.917 "blocks": 10240, 00:15:08.918 "percent": 16 00:15:08.918 } 00:15:08.918 }, 00:15:08.918 "base_bdevs_list": [ 00:15:08.918 { 00:15:08.918 "name": "spare", 00:15:08.918 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:08.918 "is_configured": true, 00:15:08.918 "data_offset": 2048, 00:15:08.918 "data_size": 63488 00:15:08.918 }, 00:15:08.918 { 
00:15:08.918 "name": "BaseBdev2", 00:15:08.918 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:08.918 "is_configured": true, 00:15:08.918 "data_offset": 2048, 00:15:08.918 "data_size": 63488 00:15:08.918 } 00:15:08.918 ] 00:15:08.918 }' 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:15:08.918 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=453 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.918 [2024-11-27 14:14:46.167465] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:08.918 "name": "raid_bdev1", 00:15:08.918 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:08.918 "strip_size_kb": 0, 00:15:08.918 "state": "online", 00:15:08.918 "raid_level": "raid1", 00:15:08.918 "superblock": true, 00:15:08.918 "num_base_bdevs": 2, 00:15:08.918 "num_base_bdevs_discovered": 2, 00:15:08.918 "num_base_bdevs_operational": 2, 00:15:08.918 "process": { 00:15:08.918 "type": "rebuild", 00:15:08.918 "target": "spare", 00:15:08.918 "progress": { 00:15:08.918 "blocks": 12288, 00:15:08.918 "percent": 19 00:15:08.918 } 00:15:08.918 }, 00:15:08.918 "base_bdevs_list": [ 00:15:08.918 { 00:15:08.918 "name": "spare", 00:15:08.918 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:08.918 "is_configured": true, 00:15:08.918 "data_offset": 2048, 00:15:08.918 "data_size": 63488 00:15:08.918 }, 00:15:08.918 { 00:15:08.918 "name": "BaseBdev2", 00:15:08.918 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:08.918 "is_configured": true, 00:15:08.918 "data_offset": 2048, 00:15:08.918 
"data_size": 63488 00:15:08.918 } 00:15:08.918 ] 00:15:08.918 }' 00:15:08.918 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:09.177 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:09.177 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:09.177 [2024-11-27 14:14:46.282459] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:09.177 [2024-11-27 14:14:46.282776] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432 00:15:09.177 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:09.177 14:14:46 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:09.436 130.00 IOPS, 390.00 MiB/s [2024-11-27T14:14:46.714Z] [2024-11-27 14:14:46.599077] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576 00:15:09.436 [2024-11-27 14:14:46.701476] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:09.436 [2024-11-27 14:14:46.701861] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:10.373 14:14:47 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.373 [2024-11-27 14:14:47.324335] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:10.373 "name": "raid_bdev1", 00:15:10.373 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:10.373 "strip_size_kb": 0, 00:15:10.373 "state": "online", 00:15:10.373 "raid_level": "raid1", 00:15:10.373 "superblock": true, 00:15:10.373 "num_base_bdevs": 2, 00:15:10.373 "num_base_bdevs_discovered": 2, 00:15:10.373 "num_base_bdevs_operational": 2, 00:15:10.373 "process": { 00:15:10.373 "type": "rebuild", 00:15:10.373 "target": "spare", 00:15:10.373 "progress": { 00:15:10.373 "blocks": 30720, 00:15:10.373 "percent": 48 00:15:10.373 } 00:15:10.373 }, 00:15:10.373 "base_bdevs_list": [ 00:15:10.373 { 00:15:10.373 "name": "spare", 00:15:10.373 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:10.373 "is_configured": true, 00:15:10.373 "data_offset": 2048, 00:15:10.373 "data_size": 63488 00:15:10.373 }, 00:15:10.373 { 00:15:10.373 "name": "BaseBdev2", 00:15:10.373 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:10.373 "is_configured": true, 00:15:10.373 
"data_offset": 2048, 00:15:10.373 "data_size": 63488 00:15:10.373 } 00:15:10.373 ] 00:15:10.373 }' 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:10.373 119.20 IOPS, 357.60 MiB/s [2024-11-27T14:14:47.651Z] 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:10.373 14:14:47 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:10.373 [2024-11-27 14:14:47.535472] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:15:10.632 [2024-11-27 14:14:47.776966] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 38912 offset_begin: 36864 offset_end: 43008 00:15:10.891 [2024-11-27 14:14:48.004563] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008 00:15:11.150 [2024-11-27 14:14:48.332873] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 45056 offset_begin: 43008 offset_end: 49152 00:15:11.410 106.33 IOPS, 319.00 MiB/s [2024-11-27T14:14:48.688Z] [2024-11-27 14:14:48.442448] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:11.410 14:14:48 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:11.410 "name": "raid_bdev1", 00:15:11.410 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:11.410 "strip_size_kb": 0, 00:15:11.410 "state": "online", 00:15:11.410 "raid_level": "raid1", 00:15:11.410 "superblock": true, 00:15:11.410 "num_base_bdevs": 2, 00:15:11.410 "num_base_bdevs_discovered": 2, 00:15:11.410 "num_base_bdevs_operational": 2, 00:15:11.410 "process": { 00:15:11.410 "type": "rebuild", 00:15:11.410 "target": "spare", 00:15:11.410 "progress": { 00:15:11.410 "blocks": 47104, 00:15:11.410 "percent": 74 00:15:11.410 } 00:15:11.410 }, 00:15:11.410 "base_bdevs_list": [ 00:15:11.410 { 00:15:11.410 "name": "spare", 00:15:11.410 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:11.410 "is_configured": true, 00:15:11.410 "data_offset": 2048, 00:15:11.410 "data_size": 63488 00:15:11.410 }, 00:15:11.410 { 00:15:11.410 "name": "BaseBdev2", 00:15:11.410 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:11.410 "is_configured": true, 00:15:11.410 "data_offset": 2048, 00:15:11.410 "data_size": 
63488 00:15:11.410 } 00:15:11.410 ] 00:15:11.410 }' 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:11.410 14:14:48 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:11.980 [2024-11-27 14:14:49.218935] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:15:12.238 95.29 IOPS, 285.86 MiB/s [2024-11-27T14:14:49.516Z] [2024-11-27 14:14:49.449225] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:12.497 [2024-11-27 14:14:49.557447] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:12.497 [2024-11-27 14:14:49.560164] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.497 "name": "raid_bdev1", 00:15:12.497 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:12.497 "strip_size_kb": 0, 00:15:12.497 "state": "online", 00:15:12.497 "raid_level": "raid1", 00:15:12.497 "superblock": true, 00:15:12.497 "num_base_bdevs": 2, 00:15:12.497 "num_base_bdevs_discovered": 2, 00:15:12.497 "num_base_bdevs_operational": 2, 00:15:12.497 "base_bdevs_list": [ 00:15:12.497 { 00:15:12.497 "name": "spare", 00:15:12.497 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:12.497 "is_configured": true, 00:15:12.497 "data_offset": 2048, 00:15:12.497 "data_size": 63488 00:15:12.497 }, 00:15:12.497 { 00:15:12.497 "name": "BaseBdev2", 00:15:12.497 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:12.497 "is_configured": true, 00:15:12.497 "data_offset": 2048, 00:15:12.497 "data_size": 63488 00:15:12.497 } 00:15:12.497 ] 00:15:12.497 }' 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:12.497 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # 
break 00:15:12.498 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:12.805 "name": "raid_bdev1", 00:15:12.805 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:12.805 "strip_size_kb": 0, 00:15:12.805 "state": "online", 00:15:12.805 "raid_level": "raid1", 00:15:12.805 "superblock": true, 00:15:12.805 "num_base_bdevs": 2, 00:15:12.805 "num_base_bdevs_discovered": 2, 00:15:12.805 "num_base_bdevs_operational": 2, 00:15:12.805 "base_bdevs_list": [ 00:15:12.805 { 00:15:12.805 "name": "spare", 00:15:12.805 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:12.805 "is_configured": true, 00:15:12.805 "data_offset": 2048, 00:15:12.805 "data_size": 63488 00:15:12.805 }, 00:15:12.805 { 00:15:12.805 "name": "BaseBdev2", 00:15:12.805 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:12.805 "is_configured": 
true, 00:15:12.805 "data_offset": 2048, 00:15:12.805 "data_size": 63488 00:15:12.805 } 00:15:12.805 ] 00:15:12.805 }' 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:12.805 14:14:49 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:12.805 "name": "raid_bdev1", 00:15:12.805 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:12.805 "strip_size_kb": 0, 00:15:12.805 "state": "online", 00:15:12.805 "raid_level": "raid1", 00:15:12.805 "superblock": true, 00:15:12.805 "num_base_bdevs": 2, 00:15:12.805 "num_base_bdevs_discovered": 2, 00:15:12.805 "num_base_bdevs_operational": 2, 00:15:12.805 "base_bdevs_list": [ 00:15:12.805 { 00:15:12.805 "name": "spare", 00:15:12.805 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:12.805 "is_configured": true, 00:15:12.805 "data_offset": 2048, 00:15:12.805 "data_size": 63488 00:15:12.805 }, 00:15:12.805 { 00:15:12.805 "name": "BaseBdev2", 00:15:12.805 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:12.805 "is_configured": true, 00:15:12.805 "data_offset": 2048, 00:15:12.805 "data_size": 63488 00:15:12.805 } 00:15:12.805 ] 00:15:12.805 }' 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:12.805 14:14:49 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.373 88.00 IOPS, 264.00 MiB/s [2024-11-27T14:14:50.651Z] 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.373 [2024-11-27 14:14:50.437308] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:13.373 [2024-11-27 
14:14:50.437345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:13.373 00:15:13.373 Latency(us) 00:15:13.373 [2024-11-27T14:14:50.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:13.373 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:15:13.373 raid_bdev1 : 8.17 86.71 260.12 0.00 0.00 15360.87 288.58 119156.36 00:15:13.373 [2024-11-27T14:14:50.651Z] =================================================================================================================== 00:15:13.373 [2024-11-27T14:14:50.651Z] Total : 86.71 260.12 0.00 0.00 15360.87 288.58 119156.36 00:15:13.373 [2024-11-27 14:14:50.557224] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:13.373 [2024-11-27 14:14:50.557328] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.373 [2024-11-27 14:14:50.557443] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:13.373 [2024-11-27 14:14:50.557460] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:13.373 { 00:15:13.373 "results": [ 00:15:13.373 { 00:15:13.373 "job": "raid_bdev1", 00:15:13.373 "core_mask": "0x1", 00:15:13.373 "workload": "randrw", 00:15:13.373 "percentage": 50, 00:15:13.373 "status": "finished", 00:15:13.373 "queue_depth": 2, 00:15:13.373 "io_size": 3145728, 00:15:13.373 "runtime": 8.165484, 00:15:13.373 "iops": 86.7064340582873, 00:15:13.373 "mibps": 260.1193021748619, 00:15:13.373 "io_failed": 0, 00:15:13.373 "io_timeout": 0, 00:15:13.373 "avg_latency_us": 15360.874370826912, 00:15:13.373 "min_latency_us": 288.58181818181816, 00:15:13.373 "max_latency_us": 119156.36363636363 00:15:13.373 } 00:15:13.373 ], 00:15:13.373 "core_count": 1 00:15:13.373 } 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.373 14:14:50 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:15:13.940 /dev/nbd0 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:13.940 1+0 records in 00:15:13.940 1+0 records out 00:15:13.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000438487 s, 9.3 MB/s 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev2 ']' 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev2 /dev/nbd1 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev2') 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:13.940 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev2 /dev/nbd1 00:15:14.199 /dev/nbd1 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:14.199 1+0 records in 00:15:14.199 1+0 records out 00:15:14.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000327946 s, 12.5 MB/s 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:14.199 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:14.199 14:14:51 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:15:14.458 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:15:14.458 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:14.458 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:15:14.458 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:14.458 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:14.458 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.458 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:14.717 14:14:51 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:14.975 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:14.975 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:14.975 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:14.975 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:14.975 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:14.975 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:14.975 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:15:14.975 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:15:14.975 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:15:14.975 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:15:14.975 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.976 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.976 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.976 
14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:14.976 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.976 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:14.976 [2024-11-27 14:14:52.217315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:14.976 [2024-11-27 14:14:52.217389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:14.976 [2024-11-27 14:14:52.217430] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:15:14.976 [2024-11-27 14:14:52.217446] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:14.976 [2024-11-27 14:14:52.220507] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:14.976 [2024-11-27 14:14:52.220554] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:14.976 [2024-11-27 14:14:52.220682] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:14.976 [2024-11-27 14:14:52.220744] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:14.976 [2024-11-27 14:14:52.220955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:14.976 spare 00:15:14.976 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.976 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:15:14.976 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.976 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.234 [2024-11-27 14:14:52.321087] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007b00 00:15:15.234 [2024-11-27 14:14:52.321161] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:15.234 [2024-11-27 14:14:52.321587] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b0d0 00:15:15.234 [2024-11-27 14:14:52.321896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:15:15.234 [2024-11-27 14:14:52.321915] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:15:15.234 [2024-11-27 14:14:52.322192] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:15.234 "name": "raid_bdev1", 00:15:15.234 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:15.234 "strip_size_kb": 0, 00:15:15.234 "state": "online", 00:15:15.234 "raid_level": "raid1", 00:15:15.234 "superblock": true, 00:15:15.234 "num_base_bdevs": 2, 00:15:15.234 "num_base_bdevs_discovered": 2, 00:15:15.234 "num_base_bdevs_operational": 2, 00:15:15.234 "base_bdevs_list": [ 00:15:15.234 { 00:15:15.234 "name": "spare", 00:15:15.234 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:15.234 "is_configured": true, 00:15:15.234 "data_offset": 2048, 00:15:15.234 "data_size": 63488 00:15:15.234 }, 00:15:15.234 { 00:15:15.234 "name": "BaseBdev2", 00:15:15.234 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:15.234 "is_configured": true, 00:15:15.234 "data_offset": 2048, 00:15:15.234 "data_size": 63488 00:15:15.234 } 00:15:15.234 ] 00:15:15.234 }' 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:15.234 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local 
process_type=none 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:15.801 "name": "raid_bdev1", 00:15:15.801 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:15.801 "strip_size_kb": 0, 00:15:15.801 "state": "online", 00:15:15.801 "raid_level": "raid1", 00:15:15.801 "superblock": true, 00:15:15.801 "num_base_bdevs": 2, 00:15:15.801 "num_base_bdevs_discovered": 2, 00:15:15.801 "num_base_bdevs_operational": 2, 00:15:15.801 "base_bdevs_list": [ 00:15:15.801 { 00:15:15.801 "name": "spare", 00:15:15.801 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:15.801 "is_configured": true, 00:15:15.801 "data_offset": 2048, 00:15:15.801 "data_size": 63488 00:15:15.801 }, 00:15:15.801 { 00:15:15.801 "name": "BaseBdev2", 00:15:15.801 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:15.801 "is_configured": true, 00:15:15.801 "data_offset": 2048, 00:15:15.801 "data_size": 63488 00:15:15.801 } 00:15:15.801 ] 00:15:15.801 }' 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:15:15.801 14:14:52 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:15.801 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:15.801 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:15.801 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.801 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:15.801 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:15:15.801 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.060 [2024-11-27 14:14:53.086500] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:16.060 "name": "raid_bdev1", 00:15:16.060 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:16.060 "strip_size_kb": 0, 00:15:16.060 "state": "online", 00:15:16.060 "raid_level": "raid1", 00:15:16.060 "superblock": true, 00:15:16.060 "num_base_bdevs": 2, 00:15:16.060 "num_base_bdevs_discovered": 1, 00:15:16.060 "num_base_bdevs_operational": 1, 00:15:16.060 "base_bdevs_list": [ 00:15:16.060 { 00:15:16.060 "name": null, 00:15:16.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:16.060 "is_configured": false, 00:15:16.060 "data_offset": 0, 00:15:16.060 "data_size": 63488 00:15:16.060 }, 00:15:16.060 { 00:15:16.060 "name": "BaseBdev2", 00:15:16.060 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:16.060 
"is_configured": true, 00:15:16.060 "data_offset": 2048, 00:15:16.060 "data_size": 63488 00:15:16.060 } 00:15:16.060 ] 00:15:16.060 }' 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:16.060 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.628 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:16.628 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.628 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:16.628 [2024-11-27 14:14:53.606840] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.628 [2024-11-27 14:14:53.607082] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:16.628 [2024-11-27 14:14:53.607108] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:15:16.628 [2024-11-27 14:14:53.607158] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:16.628 [2024-11-27 14:14:53.624351] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b1a0 00:15:16.628 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.628 14:14:53 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:15:16.628 [2024-11-27 14:14:53.627047] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:17.589 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:17.589 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:17.589 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:17.589 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:17.589 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:17.589 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.589 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.589 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.589 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.589 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.589 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:17.589 "name": "raid_bdev1", 00:15:17.589 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:17.589 "strip_size_kb": 0, 00:15:17.589 "state": "online", 
00:15:17.589 "raid_level": "raid1", 00:15:17.589 "superblock": true, 00:15:17.589 "num_base_bdevs": 2, 00:15:17.589 "num_base_bdevs_discovered": 2, 00:15:17.590 "num_base_bdevs_operational": 2, 00:15:17.590 "process": { 00:15:17.590 "type": "rebuild", 00:15:17.590 "target": "spare", 00:15:17.590 "progress": { 00:15:17.590 "blocks": 20480, 00:15:17.590 "percent": 32 00:15:17.590 } 00:15:17.590 }, 00:15:17.590 "base_bdevs_list": [ 00:15:17.590 { 00:15:17.590 "name": "spare", 00:15:17.590 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:17.590 "is_configured": true, 00:15:17.590 "data_offset": 2048, 00:15:17.590 "data_size": 63488 00:15:17.590 }, 00:15:17.590 { 00:15:17.590 "name": "BaseBdev2", 00:15:17.590 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:17.590 "is_configured": true, 00:15:17.590 "data_offset": 2048, 00:15:17.590 "data_size": 63488 00:15:17.590 } 00:15:17.590 ] 00:15:17.590 }' 00:15:17.590 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:17.590 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:17.590 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:17.590 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:17.590 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:15:17.590 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.590 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:17.590 [2024-11-27 14:14:54.805075] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.590 [2024-11-27 14:14:54.836887] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:17.590 [2024-11-27 
14:14:54.837012] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:17.590 [2024-11-27 14:14:54.837038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:17.590 [2024-11-27 14:14:54.837052] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:17.850 "name": "raid_bdev1", 00:15:17.850 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:17.850 "strip_size_kb": 0, 00:15:17.850 "state": "online", 00:15:17.850 "raid_level": "raid1", 00:15:17.850 "superblock": true, 00:15:17.850 "num_base_bdevs": 2, 00:15:17.850 "num_base_bdevs_discovered": 1, 00:15:17.850 "num_base_bdevs_operational": 1, 00:15:17.850 "base_bdevs_list": [ 00:15:17.850 { 00:15:17.850 "name": null, 00:15:17.850 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:17.850 "is_configured": false, 00:15:17.850 "data_offset": 0, 00:15:17.850 "data_size": 63488 00:15:17.850 }, 00:15:17.850 { 00:15:17.850 "name": "BaseBdev2", 00:15:17.850 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:17.850 "is_configured": true, 00:15:17.850 "data_offset": 2048, 00:15:17.850 "data_size": 63488 00:15:17.850 } 00:15:17.850 ] 00:15:17.850 }' 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:17.850 14:14:54 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.418 14:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:18.418 14:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.418 14:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:18.418 [2024-11-27 14:14:55.409085] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:18.418 [2024-11-27 14:14:55.409200] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.419 [2024-11-27 14:14:55.409232] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x61600000ae80 00:15:18.419 [2024-11-27 14:14:55.409249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.419 [2024-11-27 14:14:55.409906] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.419 [2024-11-27 14:14:55.409946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:18.419 [2024-11-27 14:14:55.410066] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:15:18.419 [2024-11-27 14:14:55.410091] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:15:18.419 [2024-11-27 14:14:55.410105] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:15:18.419 [2024-11-27 14:14:55.410151] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:18.419 [2024-11-27 14:14:55.427132] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b270 00:15:18.419 spare 00:15:18.419 14:14:55 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.419 14:14:55 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:15:18.419 [2024-11-27 14:14:55.429972] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:19.353 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:19.353 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:19.353 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:19.353 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:19.353 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:19.353 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.353 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.353 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.353 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.353 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.353 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:19.353 "name": "raid_bdev1", 00:15:19.353 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:19.353 "strip_size_kb": 0, 00:15:19.353 "state": "online", 00:15:19.353 "raid_level": "raid1", 00:15:19.353 "superblock": true, 00:15:19.353 "num_base_bdevs": 2, 00:15:19.353 "num_base_bdevs_discovered": 2, 00:15:19.353 "num_base_bdevs_operational": 2, 00:15:19.353 "process": { 00:15:19.353 "type": "rebuild", 00:15:19.353 "target": "spare", 00:15:19.353 "progress": { 00:15:19.354 "blocks": 20480, 00:15:19.354 "percent": 32 00:15:19.354 } 00:15:19.354 }, 00:15:19.354 "base_bdevs_list": [ 00:15:19.354 { 00:15:19.354 "name": "spare", 00:15:19.354 "uuid": "8aa96eda-f56e-5fcd-ab08-6629c34f4ebf", 00:15:19.354 "is_configured": true, 00:15:19.354 "data_offset": 2048, 00:15:19.354 "data_size": 63488 00:15:19.354 }, 00:15:19.354 { 00:15:19.354 "name": "BaseBdev2", 00:15:19.354 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:19.354 "is_configured": true, 00:15:19.354 "data_offset": 2048, 00:15:19.354 "data_size": 63488 00:15:19.354 } 00:15:19.354 ] 00:15:19.354 }' 00:15:19.354 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:19.354 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:15:19.354 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:19.354 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:19.354 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:15:19.354 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.354 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.354 [2024-11-27 14:14:56.616109] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.613 [2024-11-27 14:14:56.639917] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:19.613 [2024-11-27 14:14:56.639993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:19.613 [2024-11-27 14:14:56.640022] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:19.613 [2024-11-27 14:14:56.640033] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # 
local num_base_bdevs_operational=1 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:19.613 "name": "raid_bdev1", 00:15:19.613 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:19.613 "strip_size_kb": 0, 00:15:19.613 "state": "online", 00:15:19.613 "raid_level": "raid1", 00:15:19.613 "superblock": true, 00:15:19.613 "num_base_bdevs": 2, 00:15:19.613 "num_base_bdevs_discovered": 1, 00:15:19.613 "num_base_bdevs_operational": 1, 00:15:19.613 "base_bdevs_list": [ 00:15:19.613 { 00:15:19.613 "name": null, 00:15:19.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:19.613 "is_configured": false, 00:15:19.613 "data_offset": 0, 00:15:19.613 "data_size": 63488 00:15:19.613 }, 00:15:19.613 { 00:15:19.613 "name": "BaseBdev2", 00:15:19.613 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:19.613 "is_configured": true, 00:15:19.613 "data_offset": 2048, 00:15:19.613 "data_size": 63488 00:15:19.613 } 00:15:19.613 ] 00:15:19.613 }' 
00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:19.613 14:14:56 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:20.181 "name": "raid_bdev1", 00:15:20.181 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:20.181 "strip_size_kb": 0, 00:15:20.181 "state": "online", 00:15:20.181 "raid_level": "raid1", 00:15:20.181 "superblock": true, 00:15:20.181 "num_base_bdevs": 2, 00:15:20.181 "num_base_bdevs_discovered": 1, 00:15:20.181 "num_base_bdevs_operational": 1, 00:15:20.181 "base_bdevs_list": [ 00:15:20.181 { 00:15:20.181 "name": null, 00:15:20.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:20.181 "is_configured": false, 00:15:20.181 "data_offset": 0, 
00:15:20.181 "data_size": 63488 00:15:20.181 }, 00:15:20.181 { 00:15:20.181 "name": "BaseBdev2", 00:15:20.181 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:20.181 "is_configured": true, 00:15:20.181 "data_offset": 2048, 00:15:20.181 "data_size": 63488 00:15:20.181 } 00:15:20.181 ] 00:15:20.181 }' 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:20.181 [2024-11-27 14:14:57.421855] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:20.181 [2024-11-27 14:14:57.421938] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.181 [2024-11-27 14:14:57.421981] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:15:20.181 [2024-11-27 14:14:57.422000] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.181 [2024-11-27 14:14:57.422608] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.181 [2024-11-27 14:14:57.422652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:20.181 [2024-11-27 14:14:57.422760] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:15:20.181 [2024-11-27 14:14:57.422804] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:20.181 [2024-11-27 14:14:57.422838] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:20.181 [2024-11-27 14:14:57.422852] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:15:20.181 BaseBdev1 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.181 14:14:57 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.558 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:21.558 "name": "raid_bdev1", 00:15:21.558 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:21.558 "strip_size_kb": 0, 00:15:21.558 "state": "online", 00:15:21.558 "raid_level": "raid1", 00:15:21.558 "superblock": true, 00:15:21.558 "num_base_bdevs": 2, 00:15:21.558 "num_base_bdevs_discovered": 1, 00:15:21.558 "num_base_bdevs_operational": 1, 00:15:21.558 "base_bdevs_list": [ 00:15:21.558 { 00:15:21.558 "name": null, 00:15:21.558 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.558 "is_configured": false, 00:15:21.558 "data_offset": 0, 00:15:21.558 "data_size": 63488 00:15:21.558 }, 00:15:21.558 { 00:15:21.558 "name": "BaseBdev2", 00:15:21.558 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:21.559 "is_configured": true, 00:15:21.559 "data_offset": 2048, 00:15:21.559 "data_size": 63488 00:15:21.559 } 00:15:21.559 ] 00:15:21.559 }' 00:15:21.559 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:21.559 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 
00:15:21.819 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:21.819 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:21.819 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:21.819 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:21.819 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:21.819 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:21.819 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.819 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:21.819 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:21.819 14:14:58 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.819 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:21.819 "name": "raid_bdev1", 00:15:21.819 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:21.819 "strip_size_kb": 0, 00:15:21.819 "state": "online", 00:15:21.819 "raid_level": "raid1", 00:15:21.819 "superblock": true, 00:15:21.819 "num_base_bdevs": 2, 00:15:21.819 "num_base_bdevs_discovered": 1, 00:15:21.819 "num_base_bdevs_operational": 1, 00:15:21.819 "base_bdevs_list": [ 00:15:21.819 { 00:15:21.819 "name": null, 00:15:21.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:21.819 "is_configured": false, 00:15:21.819 "data_offset": 0, 00:15:21.819 "data_size": 63488 00:15:21.819 }, 00:15:21.819 { 00:15:21.819 "name": "BaseBdev2", 00:15:21.819 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:21.819 "is_configured": true, 
00:15:21.819 "data_offset": 2048, 00:15:21.819 "data_size": 63488 00:15:21.819 } 00:15:21.819 ] 00:15:21.819 }' 00:15:21.819 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:21.819 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:21.819 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:22.079 [2024-11-27 14:14:59.174705] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:22.079 [2024-11-27 14:14:59.174920] 
bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:15:22.079 [2024-11-27 14:14:59.175086] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:15:22.079 request: 00:15:22.079 { 00:15:22.079 "base_bdev": "BaseBdev1", 00:15:22.079 "raid_bdev": "raid_bdev1", 00:15:22.079 "method": "bdev_raid_add_base_bdev", 00:15:22.079 "req_id": 1 00:15:22.079 } 00:15:22.079 Got JSON-RPC error response 00:15:22.079 response: 00:15:22.079 { 00:15:22.079 "code": -22, 00:15:22.079 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:15:22.079 } 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:15:22.079 14:14:59 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=1 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:23.030 "name": "raid_bdev1", 00:15:23.030 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:23.030 "strip_size_kb": 0, 00:15:23.030 "state": "online", 00:15:23.030 "raid_level": "raid1", 00:15:23.030 "superblock": true, 00:15:23.030 "num_base_bdevs": 2, 00:15:23.030 "num_base_bdevs_discovered": 1, 00:15:23.030 "num_base_bdevs_operational": 1, 00:15:23.030 "base_bdevs_list": [ 00:15:23.030 { 00:15:23.030 "name": null, 00:15:23.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.030 "is_configured": false, 00:15:23.030 "data_offset": 0, 00:15:23.030 "data_size": 63488 00:15:23.030 }, 00:15:23.030 { 00:15:23.030 "name": "BaseBdev2", 00:15:23.030 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:23.030 "is_configured": true, 00:15:23.030 "data_offset": 2048, 00:15:23.030 "data_size": 63488 00:15:23.030 } 00:15:23.030 ] 00:15:23.030 }' 
00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:23.030 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.597 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:23.597 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:23.597 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:23.597 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:23.598 "name": "raid_bdev1", 00:15:23.598 "uuid": "4fefa594-c420-4349-86a5-83c3609abc40", 00:15:23.598 "strip_size_kb": 0, 00:15:23.598 "state": "online", 00:15:23.598 "raid_level": "raid1", 00:15:23.598 "superblock": true, 00:15:23.598 "num_base_bdevs": 2, 00:15:23.598 "num_base_bdevs_discovered": 1, 00:15:23.598 "num_base_bdevs_operational": 1, 00:15:23.598 "base_bdevs_list": [ 00:15:23.598 { 00:15:23.598 "name": null, 00:15:23.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:23.598 "is_configured": false, 00:15:23.598 "data_offset": 0, 
00:15:23.598 "data_size": 63488 00:15:23.598 }, 00:15:23.598 { 00:15:23.598 "name": "BaseBdev2", 00:15:23.598 "uuid": "9bc66e8a-bdd9-5627-a458-18385531f899", 00:15:23.598 "is_configured": true, 00:15:23.598 "data_offset": 2048, 00:15:23.598 "data_size": 63488 00:15:23.598 } 00:15:23.598 ] 00:15:23.598 }' 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 76986 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 76986 ']' 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 76986 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76986 00:15:23.598 killing process with pid 76986 00:15:23.598 Received shutdown signal, test time was about 18.493621 seconds 00:15:23.598 00:15:23.598 Latency(us) 00:15:23.598 [2024-11-27T14:15:00.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:23.598 [2024-11-27T14:15:00.876Z] =================================================================================================================== 00:15:23.598 [2024-11-27T14:15:00.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76986' 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 76986 00:15:23.598 [2024-11-27 14:15:00.865364] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:23.598 14:15:00 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 76986 00:15:23.598 [2024-11-27 14:15:00.865523] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:23.598 [2024-11-27 14:15:00.865595] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:23.598 [2024-11-27 14:15:00.865615] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:15:23.857 [2024-11-27 14:15:01.072593] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:15:25.236 00:15:25.236 real 0m21.816s 00:15:25.236 user 0m29.847s 00:15:25.236 sys 0m1.989s 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:15:25.236 ************************************ 00:15:25.236 END TEST raid_rebuild_test_sb_io 00:15:25.236 ************************************ 00:15:25.236 14:15:02 bdev_raid -- bdev/bdev_raid.sh@977 -- # for n in 2 4 00:15:25.236 14:15:02 bdev_raid -- bdev/bdev_raid.sh@978 -- # run_test raid_rebuild_test raid_rebuild_test raid1 4 false false true 00:15:25.236 14:15:02 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 
00:15:25.236 14:15:02 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:25.236 14:15:02 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:25.236 ************************************ 00:15:25.236 START TEST raid_rebuild_test 00:15:25.236 ************************************ 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false false true 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i 
<= num_base_bdevs )) 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:25.236 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=77689 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 77689 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 77689 ']' 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:25.237 14:15:02 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:25.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:25.237 14:15:02 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.237 [2024-11-27 14:15:02.340510] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:15:25.237 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:25.237 Zero copy mechanism will not be used. 00:15:25.237 [2024-11-27 14:15:02.340914] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77689 ] 00:15:25.495 [2024-11-27 14:15:02.525368] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:25.495 [2024-11-27 14:15:02.656604] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:25.753 [2024-11-27 14:15:02.860141] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:25.753 [2024-11-27 14:15:02.860220] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:26.323 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.323 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:15:26.323 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.323 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b 
BaseBdev1_malloc 00:15:26.323 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.323 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.323 BaseBdev1_malloc 00:15:26.323 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.323 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.324 [2024-11-27 14:15:03.414098] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:15:26.324 [2024-11-27 14:15:03.414176] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.324 [2024-11-27 14:15:03.414210] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:26.324 [2024-11-27 14:15:03.414230] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.324 [2024-11-27 14:15:03.417021] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.324 [2024-11-27 14:15:03.417075] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:26.324 BaseBdev1 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 
00:15:26.324 BaseBdev2_malloc 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.324 [2024-11-27 14:15:03.465951] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:26.324 [2024-11-27 14:15:03.466036] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.324 [2024-11-27 14:15:03.466071] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:26.324 [2024-11-27 14:15:03.466091] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.324 [2024-11-27 14:15:03.468941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.324 [2024-11-27 14:15:03.468992] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:26.324 BaseBdev2 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.324 BaseBdev3_malloc 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd 
bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.324 [2024-11-27 14:15:03.529821] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:26.324 [2024-11-27 14:15:03.529898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.324 [2024-11-27 14:15:03.529931] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:26.324 [2024-11-27 14:15:03.529950] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.324 [2024-11-27 14:15:03.532724] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.324 [2024-11-27 14:15:03.532943] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:26.324 BaseBdev3 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.324 BaseBdev4_malloc 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:26.324 [2024-11-27 14:15:03.578117] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:26.324 [2024-11-27 14:15:03.578198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.324 [2024-11-27 14:15:03.578230] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:26.324 [2024-11-27 14:15:03.578249] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.324 [2024-11-27 14:15:03.581008] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.324 [2024-11-27 14:15:03.581064] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:26.324 BaseBdev4 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.324 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.582 spare_malloc 00:15:26.582 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.582 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:26.582 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.582 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.582 spare_delay 00:15:26.582 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.582 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:26.582 
14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.582 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.582 [2024-11-27 14:15:03.634286] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:26.582 [2024-11-27 14:15:03.634358] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:26.583 [2024-11-27 14:15:03.634387] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:26.583 [2024-11-27 14:15:03.634405] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:26.583 [2024-11-27 14:15:03.637169] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:26.583 [2024-11-27 14:15:03.637220] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:26.583 spare 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.583 [2024-11-27 14:15:03.642334] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:26.583 [2024-11-27 14:15:03.644755] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:26.583 [2024-11-27 14:15:03.644869] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:26.583 [2024-11-27 14:15:03.644955] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:26.583 [2024-11-27 14:15:03.645072] bdev_raid.c:1734:raid_bdev_configure_cont: 
*DEBUG*: io device register 0x617000007780 00:15:26.583 [2024-11-27 14:15:03.645096] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:26.583 [2024-11-27 14:15:03.645429] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:26.583 [2024-11-27 14:15:03.645651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:26.583 [2024-11-27 14:15:03.645672] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:26.583 [2024-11-27 14:15:03.645906] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:26.583 14:15:03 
bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:26.583 "name": "raid_bdev1", 00:15:26.583 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:26.583 "strip_size_kb": 0, 00:15:26.583 "state": "online", 00:15:26.583 "raid_level": "raid1", 00:15:26.583 "superblock": false, 00:15:26.583 "num_base_bdevs": 4, 00:15:26.583 "num_base_bdevs_discovered": 4, 00:15:26.583 "num_base_bdevs_operational": 4, 00:15:26.583 "base_bdevs_list": [ 00:15:26.583 { 00:15:26.583 "name": "BaseBdev1", 00:15:26.583 "uuid": "53ee3a23-c80b-5abf-ae1b-e754e9fe5023", 00:15:26.583 "is_configured": true, 00:15:26.583 "data_offset": 0, 00:15:26.583 "data_size": 65536 00:15:26.583 }, 00:15:26.583 { 00:15:26.583 "name": "BaseBdev2", 00:15:26.583 "uuid": "c44f35ea-3820-52da-86b9-fa8d5081f8c6", 00:15:26.583 "is_configured": true, 00:15:26.583 "data_offset": 0, 00:15:26.583 "data_size": 65536 00:15:26.583 }, 00:15:26.583 { 00:15:26.583 "name": "BaseBdev3", 00:15:26.583 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:26.583 "is_configured": true, 00:15:26.583 "data_offset": 0, 00:15:26.583 "data_size": 65536 00:15:26.583 }, 00:15:26.583 { 00:15:26.583 "name": "BaseBdev4", 00:15:26.583 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:26.583 "is_configured": true, 00:15:26.583 "data_offset": 0, 00:15:26.583 "data_size": 65536 00:15:26.583 } 00:15:26.583 ] 00:15:26.583 }' 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:26.583 14:15:03 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # 
set +x 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.150 [2024-11-27 14:15:04.170932] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- 
bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.150 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:27.410 [2024-11-27 14:15:04.638697] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:27.410 /dev/nbd0 00:15:27.410 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:27.410 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:27.410 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:27.410 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:27.410 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:27.410 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:27.410 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:27.410 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:27.410 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:27.410 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:27.410 14:15:04 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:27.668 1+0 records in 00:15:27.668 1+0 records out 00:15:27.668 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000657591 s, 6.2 MB/s 00:15:27.668 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.668 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:27.668 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:27.668 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:27.668 14:15:04 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:27.668 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:27.669 14:15:04 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:27.669 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:27.669 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:27.669 14:15:04 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=65536 oflag=direct 00:15:37.679 65536+0 records in 00:15:37.679 65536+0 records out 00:15:37.679 33554432 bytes (34 MB, 32 MiB) copied, 8.77955 s, 3.8 MB/s 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:37.679 
14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:37.679 [2024-11-27 14:15:13.763363] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.679 [2024-11-27 14:15:13.779477] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local 
raid_bdev_name=raid_bdev1 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:37.679 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:37.680 14:15:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.680 14:15:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.680 14:15:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.680 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:37.680 "name": "raid_bdev1", 00:15:37.680 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:37.680 "strip_size_kb": 0, 00:15:37.680 "state": "online", 00:15:37.680 "raid_level": "raid1", 00:15:37.680 "superblock": false, 00:15:37.680 "num_base_bdevs": 4, 00:15:37.680 "num_base_bdevs_discovered": 3, 00:15:37.680 "num_base_bdevs_operational": 3, 00:15:37.680 "base_bdevs_list": [ 00:15:37.680 { 00:15:37.680 "name": null, 00:15:37.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:37.680 
"is_configured": false, 00:15:37.680 "data_offset": 0, 00:15:37.680 "data_size": 65536 00:15:37.680 }, 00:15:37.680 { 00:15:37.680 "name": "BaseBdev2", 00:15:37.680 "uuid": "c44f35ea-3820-52da-86b9-fa8d5081f8c6", 00:15:37.680 "is_configured": true, 00:15:37.680 "data_offset": 0, 00:15:37.680 "data_size": 65536 00:15:37.680 }, 00:15:37.680 { 00:15:37.680 "name": "BaseBdev3", 00:15:37.680 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:37.680 "is_configured": true, 00:15:37.680 "data_offset": 0, 00:15:37.680 "data_size": 65536 00:15:37.680 }, 00:15:37.680 { 00:15:37.680 "name": "BaseBdev4", 00:15:37.680 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:37.680 "is_configured": true, 00:15:37.680 "data_offset": 0, 00:15:37.680 "data_size": 65536 00:15:37.680 } 00:15:37.680 ] 00:15:37.680 }' 00:15:37.680 14:15:13 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:37.680 14:15:13 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.680 14:15:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:37.680 14:15:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.680 14:15:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:37.680 [2024-11-27 14:15:14.307617] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:37.680 [2024-11-27 14:15:14.321984] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09d70 00:15:37.680 14:15:14 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.680 14:15:14 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:37.680 [2024-11-27 14:15:14.324653] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@650 -- # 
verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:38.248 "name": "raid_bdev1", 00:15:38.248 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:38.248 "strip_size_kb": 0, 00:15:38.248 "state": "online", 00:15:38.248 "raid_level": "raid1", 00:15:38.248 "superblock": false, 00:15:38.248 "num_base_bdevs": 4, 00:15:38.248 "num_base_bdevs_discovered": 4, 00:15:38.248 "num_base_bdevs_operational": 4, 00:15:38.248 "process": { 00:15:38.248 "type": "rebuild", 00:15:38.248 "target": "spare", 00:15:38.248 "progress": { 00:15:38.248 "blocks": 20480, 00:15:38.248 "percent": 31 00:15:38.248 } 00:15:38.248 }, 00:15:38.248 "base_bdevs_list": [ 00:15:38.248 { 00:15:38.248 "name": "spare", 00:15:38.248 "uuid": "f3dd086b-9113-5147-83c8-df8a4e77e92f", 00:15:38.248 "is_configured": true, 00:15:38.248 "data_offset": 0, 00:15:38.248 "data_size": 65536 00:15:38.248 }, 00:15:38.248 { 00:15:38.248 "name": "BaseBdev2", 00:15:38.248 "uuid": 
"c44f35ea-3820-52da-86b9-fa8d5081f8c6", 00:15:38.248 "is_configured": true, 00:15:38.248 "data_offset": 0, 00:15:38.248 "data_size": 65536 00:15:38.248 }, 00:15:38.248 { 00:15:38.248 "name": "BaseBdev3", 00:15:38.248 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:38.248 "is_configured": true, 00:15:38.248 "data_offset": 0, 00:15:38.248 "data_size": 65536 00:15:38.248 }, 00:15:38.248 { 00:15:38.248 "name": "BaseBdev4", 00:15:38.248 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:38.248 "is_configured": true, 00:15:38.248 "data_offset": 0, 00:15:38.248 "data_size": 65536 00:15:38.248 } 00:15:38.248 ] 00:15:38.248 }' 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.248 14:15:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.248 [2024-11-27 14:15:15.497817] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:38.507 [2024-11-27 14:15:15.533812] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:38.507 [2024-11-27 14:15:15.533912] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:38.507 [2024-11-27 14:15:15.533940] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:38.507 [2024-11-27 14:15:15.533956] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove 
target bdev: No such device 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.507 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:38.507 "name": "raid_bdev1", 00:15:38.507 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:38.507 "strip_size_kb": 0, 00:15:38.507 "state": "online", 
00:15:38.507 "raid_level": "raid1", 00:15:38.507 "superblock": false, 00:15:38.507 "num_base_bdevs": 4, 00:15:38.507 "num_base_bdevs_discovered": 3, 00:15:38.507 "num_base_bdevs_operational": 3, 00:15:38.507 "base_bdevs_list": [ 00:15:38.507 { 00:15:38.507 "name": null, 00:15:38.507 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.507 "is_configured": false, 00:15:38.507 "data_offset": 0, 00:15:38.507 "data_size": 65536 00:15:38.507 }, 00:15:38.507 { 00:15:38.507 "name": "BaseBdev2", 00:15:38.507 "uuid": "c44f35ea-3820-52da-86b9-fa8d5081f8c6", 00:15:38.507 "is_configured": true, 00:15:38.507 "data_offset": 0, 00:15:38.507 "data_size": 65536 00:15:38.507 }, 00:15:38.507 { 00:15:38.507 "name": "BaseBdev3", 00:15:38.507 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:38.507 "is_configured": true, 00:15:38.508 "data_offset": 0, 00:15:38.508 "data_size": 65536 00:15:38.508 }, 00:15:38.508 { 00:15:38.508 "name": "BaseBdev4", 00:15:38.508 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:38.508 "is_configured": true, 00:15:38.508 "data_offset": 0, 00:15:38.508 "data_size": 65536 00:15:38.508 } 00:15:38.508 ] 00:15:38.508 }' 00:15:38.508 14:15:15 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:38.508 14:15:15 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.074 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:39.074 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:39.074 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:39.074 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:39.074 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:39.074 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:15:39.074 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:39.074 14:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.074 14:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.074 14:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.074 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:39.074 "name": "raid_bdev1", 00:15:39.074 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:39.074 "strip_size_kb": 0, 00:15:39.074 "state": "online", 00:15:39.074 "raid_level": "raid1", 00:15:39.074 "superblock": false, 00:15:39.074 "num_base_bdevs": 4, 00:15:39.074 "num_base_bdevs_discovered": 3, 00:15:39.074 "num_base_bdevs_operational": 3, 00:15:39.074 "base_bdevs_list": [ 00:15:39.074 { 00:15:39.074 "name": null, 00:15:39.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:39.074 "is_configured": false, 00:15:39.074 "data_offset": 0, 00:15:39.074 "data_size": 65536 00:15:39.074 }, 00:15:39.074 { 00:15:39.074 "name": "BaseBdev2", 00:15:39.074 "uuid": "c44f35ea-3820-52da-86b9-fa8d5081f8c6", 00:15:39.074 "is_configured": true, 00:15:39.074 "data_offset": 0, 00:15:39.074 "data_size": 65536 00:15:39.074 }, 00:15:39.074 { 00:15:39.074 "name": "BaseBdev3", 00:15:39.074 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:39.074 "is_configured": true, 00:15:39.074 "data_offset": 0, 00:15:39.074 "data_size": 65536 00:15:39.074 }, 00:15:39.074 { 00:15:39.074 "name": "BaseBdev4", 00:15:39.074 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:39.074 "is_configured": true, 00:15:39.074 "data_offset": 0, 00:15:39.074 "data_size": 65536 00:15:39.074 } 00:15:39.075 ] 00:15:39.075 }' 00:15:39.075 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:39.075 14:15:16 bdev_raid.raid_rebuild_test 
-- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:39.075 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:39.075 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:15:39.075 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:39.075 14:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.075 14:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.075 [2024-11-27 14:15:16.221677] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:39.075 [2024-11-27 14:15:16.235164] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000d09e40 00:15:39.075 14:15:16 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.075 14:15:16 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:15:39.075 [2024-11-27 14:15:16.237714] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:40.008 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.008 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.008 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.008 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.008 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.008 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.008 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.008 14:15:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.008 14:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.008 14:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.266 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.266 "name": "raid_bdev1", 00:15:40.266 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:40.266 "strip_size_kb": 0, 00:15:40.266 "state": "online", 00:15:40.266 "raid_level": "raid1", 00:15:40.266 "superblock": false, 00:15:40.266 "num_base_bdevs": 4, 00:15:40.266 "num_base_bdevs_discovered": 4, 00:15:40.266 "num_base_bdevs_operational": 4, 00:15:40.266 "process": { 00:15:40.266 "type": "rebuild", 00:15:40.266 "target": "spare", 00:15:40.266 "progress": { 00:15:40.266 "blocks": 20480, 00:15:40.266 "percent": 31 00:15:40.266 } 00:15:40.266 }, 00:15:40.266 "base_bdevs_list": [ 00:15:40.266 { 00:15:40.266 "name": "spare", 00:15:40.266 "uuid": "f3dd086b-9113-5147-83c8-df8a4e77e92f", 00:15:40.266 "is_configured": true, 00:15:40.266 "data_offset": 0, 00:15:40.266 "data_size": 65536 00:15:40.266 }, 00:15:40.266 { 00:15:40.266 "name": "BaseBdev2", 00:15:40.266 "uuid": "c44f35ea-3820-52da-86b9-fa8d5081f8c6", 00:15:40.266 "is_configured": true, 00:15:40.266 "data_offset": 0, 00:15:40.266 "data_size": 65536 00:15:40.266 }, 00:15:40.266 { 00:15:40.266 "name": "BaseBdev3", 00:15:40.266 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:40.266 "is_configured": true, 00:15:40.266 "data_offset": 0, 00:15:40.266 "data_size": 65536 00:15:40.266 }, 00:15:40.266 { 00:15:40.266 "name": "BaseBdev4", 00:15:40.266 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:40.266 "is_configured": true, 00:15:40.266 "data_offset": 0, 00:15:40.267 "data_size": 65536 00:15:40.267 } 00:15:40.267 ] 00:15:40.267 }' 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // 
"none"' 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.267 [2024-11-27 14:15:17.406809] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:40.267 [2024-11-27 14:15:17.446789] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000d09e40 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.267 
14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.267 "name": "raid_bdev1", 00:15:40.267 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:40.267 "strip_size_kb": 0, 00:15:40.267 "state": "online", 00:15:40.267 "raid_level": "raid1", 00:15:40.267 "superblock": false, 00:15:40.267 "num_base_bdevs": 4, 00:15:40.267 "num_base_bdevs_discovered": 3, 00:15:40.267 "num_base_bdevs_operational": 3, 00:15:40.267 "process": { 00:15:40.267 "type": "rebuild", 00:15:40.267 "target": "spare", 00:15:40.267 "progress": { 00:15:40.267 "blocks": 24576, 00:15:40.267 "percent": 37 00:15:40.267 } 00:15:40.267 }, 00:15:40.267 "base_bdevs_list": [ 00:15:40.267 { 00:15:40.267 "name": "spare", 00:15:40.267 "uuid": "f3dd086b-9113-5147-83c8-df8a4e77e92f", 00:15:40.267 "is_configured": true, 00:15:40.267 "data_offset": 0, 00:15:40.267 "data_size": 65536 00:15:40.267 }, 00:15:40.267 { 00:15:40.267 "name": null, 00:15:40.267 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.267 "is_configured": false, 00:15:40.267 "data_offset": 0, 00:15:40.267 "data_size": 65536 00:15:40.267 }, 00:15:40.267 { 00:15:40.267 "name": "BaseBdev3", 00:15:40.267 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:40.267 "is_configured": true, 
00:15:40.267 "data_offset": 0, 00:15:40.267 "data_size": 65536 00:15:40.267 }, 00:15:40.267 { 00:15:40.267 "name": "BaseBdev4", 00:15:40.267 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:40.267 "is_configured": true, 00:15:40.267 "data_offset": 0, 00:15:40.267 "data_size": 65536 00:15:40.267 } 00:15:40.267 ] 00:15:40.267 }' 00:15:40.267 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=484 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.592 14:15:17 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:40.592 "name": "raid_bdev1", 00:15:40.592 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:40.592 "strip_size_kb": 0, 00:15:40.592 "state": "online", 00:15:40.592 "raid_level": "raid1", 00:15:40.592 "superblock": false, 00:15:40.592 "num_base_bdevs": 4, 00:15:40.592 "num_base_bdevs_discovered": 3, 00:15:40.592 "num_base_bdevs_operational": 3, 00:15:40.592 "process": { 00:15:40.592 "type": "rebuild", 00:15:40.592 "target": "spare", 00:15:40.592 "progress": { 00:15:40.592 "blocks": 26624, 00:15:40.592 "percent": 40 00:15:40.592 } 00:15:40.592 }, 00:15:40.592 "base_bdevs_list": [ 00:15:40.592 { 00:15:40.592 "name": "spare", 00:15:40.592 "uuid": "f3dd086b-9113-5147-83c8-df8a4e77e92f", 00:15:40.592 "is_configured": true, 00:15:40.592 "data_offset": 0, 00:15:40.592 "data_size": 65536 00:15:40.592 }, 00:15:40.592 { 00:15:40.592 "name": null, 00:15:40.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.592 "is_configured": false, 00:15:40.592 "data_offset": 0, 00:15:40.592 "data_size": 65536 00:15:40.592 }, 00:15:40.592 { 00:15:40.592 "name": "BaseBdev3", 00:15:40.592 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:40.592 "is_configured": true, 00:15:40.592 "data_offset": 0, 00:15:40.592 "data_size": 65536 00:15:40.592 }, 00:15:40.592 { 00:15:40.592 "name": "BaseBdev4", 00:15:40.592 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:40.592 "is_configured": true, 00:15:40.592 "data_offset": 0, 00:15:40.592 "data_size": 65536 00:15:40.592 } 00:15:40.592 ] 00:15:40.592 }' 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq 
-r '.process.target // "none"' 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:40.592 14:15:17 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:41.547 "name": "raid_bdev1", 00:15:41.547 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:41.547 "strip_size_kb": 0, 00:15:41.547 "state": "online", 00:15:41.547 "raid_level": "raid1", 00:15:41.547 "superblock": false, 00:15:41.547 "num_base_bdevs": 4, 00:15:41.547 "num_base_bdevs_discovered": 3, 00:15:41.547 "num_base_bdevs_operational": 3, 00:15:41.547 "process": { 00:15:41.547 "type": "rebuild", 00:15:41.547 "target": "spare", 00:15:41.547 "progress": { 00:15:41.547 
"blocks": 51200, 00:15:41.547 "percent": 78 00:15:41.547 } 00:15:41.547 }, 00:15:41.547 "base_bdevs_list": [ 00:15:41.547 { 00:15:41.547 "name": "spare", 00:15:41.547 "uuid": "f3dd086b-9113-5147-83c8-df8a4e77e92f", 00:15:41.547 "is_configured": true, 00:15:41.547 "data_offset": 0, 00:15:41.547 "data_size": 65536 00:15:41.547 }, 00:15:41.547 { 00:15:41.547 "name": null, 00:15:41.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.547 "is_configured": false, 00:15:41.547 "data_offset": 0, 00:15:41.547 "data_size": 65536 00:15:41.547 }, 00:15:41.547 { 00:15:41.547 "name": "BaseBdev3", 00:15:41.547 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:41.547 "is_configured": true, 00:15:41.547 "data_offset": 0, 00:15:41.547 "data_size": 65536 00:15:41.547 }, 00:15:41.547 { 00:15:41.547 "name": "BaseBdev4", 00:15:41.547 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:41.547 "is_configured": true, 00:15:41.547 "data_offset": 0, 00:15:41.547 "data_size": 65536 00:15:41.547 } 00:15:41.547 ] 00:15:41.547 }' 00:15:41.547 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:41.806 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:41.806 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:41.806 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:41.806 14:15:18 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:15:42.373 [2024-11-27 14:15:19.461790] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:15:42.373 [2024-11-27 14:15:19.461891] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:15:42.373 [2024-11-27 14:15:19.461956] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- 
bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.940 14:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.940 "name": "raid_bdev1", 00:15:42.940 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:42.940 "strip_size_kb": 0, 00:15:42.940 "state": "online", 00:15:42.940 "raid_level": "raid1", 00:15:42.940 "superblock": false, 00:15:42.940 "num_base_bdevs": 4, 00:15:42.940 "num_base_bdevs_discovered": 3, 00:15:42.940 "num_base_bdevs_operational": 3, 00:15:42.940 "base_bdevs_list": [ 00:15:42.940 { 00:15:42.940 "name": "spare", 00:15:42.940 "uuid": "f3dd086b-9113-5147-83c8-df8a4e77e92f", 00:15:42.940 "is_configured": true, 00:15:42.940 "data_offset": 0, 00:15:42.940 "data_size": 65536 00:15:42.940 }, 00:15:42.940 { 00:15:42.940 "name": null, 00:15:42.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.941 "is_configured": false, 00:15:42.941 
"data_offset": 0, 00:15:42.941 "data_size": 65536 00:15:42.941 }, 00:15:42.941 { 00:15:42.941 "name": "BaseBdev3", 00:15:42.941 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:42.941 "is_configured": true, 00:15:42.941 "data_offset": 0, 00:15:42.941 "data_size": 65536 00:15:42.941 }, 00:15:42.941 { 00:15:42.941 "name": "BaseBdev4", 00:15:42.941 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:42.941 "is_configured": true, 00:15:42.941 "data_offset": 0, 00:15:42.941 "data_size": 65536 00:15:42.941 } 00:15:42.941 ] 00:15:42.941 }' 00:15:42.941 14:15:19 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.941 14:15:20 
bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:42.941 "name": "raid_bdev1", 00:15:42.941 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:42.941 "strip_size_kb": 0, 00:15:42.941 "state": "online", 00:15:42.941 "raid_level": "raid1", 00:15:42.941 "superblock": false, 00:15:42.941 "num_base_bdevs": 4, 00:15:42.941 "num_base_bdevs_discovered": 3, 00:15:42.941 "num_base_bdevs_operational": 3, 00:15:42.941 "base_bdevs_list": [ 00:15:42.941 { 00:15:42.941 "name": "spare", 00:15:42.941 "uuid": "f3dd086b-9113-5147-83c8-df8a4e77e92f", 00:15:42.941 "is_configured": true, 00:15:42.941 "data_offset": 0, 00:15:42.941 "data_size": 65536 00:15:42.941 }, 00:15:42.941 { 00:15:42.941 "name": null, 00:15:42.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.941 "is_configured": false, 00:15:42.941 "data_offset": 0, 00:15:42.941 "data_size": 65536 00:15:42.941 }, 00:15:42.941 { 00:15:42.941 "name": "BaseBdev3", 00:15:42.941 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:42.941 "is_configured": true, 00:15:42.941 "data_offset": 0, 00:15:42.941 "data_size": 65536 00:15:42.941 }, 00:15:42.941 { 00:15:42.941 "name": "BaseBdev4", 00:15:42.941 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:42.941 "is_configured": true, 00:15:42.941 "data_offset": 0, 00:15:42.941 "data_size": 65536 00:15:42.941 } 00:15:42.941 ] 00:15:42.941 }' 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:15:42.941 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none 
== \n\o\n\e ]] 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:43.200 "name": "raid_bdev1", 00:15:43.200 "uuid": "8550a416-133d-40f4-bfce-11e8bfd8f91d", 00:15:43.200 "strip_size_kb": 0, 00:15:43.200 "state": "online", 00:15:43.200 "raid_level": "raid1", 00:15:43.200 "superblock": false, 00:15:43.200 "num_base_bdevs": 4, 00:15:43.200 
"num_base_bdevs_discovered": 3, 00:15:43.200 "num_base_bdevs_operational": 3, 00:15:43.200 "base_bdevs_list": [ 00:15:43.200 { 00:15:43.200 "name": "spare", 00:15:43.200 "uuid": "f3dd086b-9113-5147-83c8-df8a4e77e92f", 00:15:43.200 "is_configured": true, 00:15:43.200 "data_offset": 0, 00:15:43.200 "data_size": 65536 00:15:43.200 }, 00:15:43.200 { 00:15:43.200 "name": null, 00:15:43.200 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.200 "is_configured": false, 00:15:43.200 "data_offset": 0, 00:15:43.200 "data_size": 65536 00:15:43.200 }, 00:15:43.200 { 00:15:43.200 "name": "BaseBdev3", 00:15:43.200 "uuid": "ad188a8b-91e1-5745-887e-d53332277e43", 00:15:43.200 "is_configured": true, 00:15:43.200 "data_offset": 0, 00:15:43.200 "data_size": 65536 00:15:43.200 }, 00:15:43.200 { 00:15:43.200 "name": "BaseBdev4", 00:15:43.200 "uuid": "4d9f02b0-4bd3-5780-99b7-d59059e98a24", 00:15:43.200 "is_configured": true, 00:15:43.200 "data_offset": 0, 00:15:43.200 "data_size": 65536 00:15:43.200 } 00:15:43.200 ] 00:15:43.200 }' 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:43.200 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.767 [2024-11-27 14:15:20.801852] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:43.767 [2024-11-27 14:15:20.802054] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:43.767 [2024-11-27 14:15:20.802175] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:43.767 [2024-11-27 14:15:20.802286] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:15:43.767 [2024-11-27 14:15:20.802304] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:43.767 14:15:20 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:43.767 14:15:20 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:15:44.026 /dev/nbd0 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.026 1+0 records in 00:15:44.026 1+0 records out 00:15:44.026 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000569756 s, 7.2 MB/s 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 
00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.026 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:15:44.285 /dev/nbd1 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@877 -- # break 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:44.285 1+0 records in 00:15:44.285 1+0 records out 00:15:44.285 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000384615 s, 10.6 MB/s 00:15:44.285 14:15:21 bdev_raid.raid_rebuild_test -- 
common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.543 14:15:21 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:15:44.802 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:44.802 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:44.802 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:44.802 14:15:22 
bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:44.802 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:44.802 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:44.802 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:44.802 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:44.802 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:44.802 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 77689 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 77689 ']' 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 77689 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # 
uname 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77689 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:45.370 killing process with pid 77689 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77689' 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@973 -- # kill 77689 00:15:45.370 Received shutdown signal, test time was about 60.000000 seconds 00:15:45.370 00:15:45.370 Latency(us) 00:15:45.370 [2024-11-27T14:15:22.648Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:45.370 [2024-11-27T14:15:22.648Z] =================================================================================================================== 00:15:45.370 [2024-11-27T14:15:22.648Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:15:45.370 14:15:22 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@978 -- # wait 77689 00:15:45.370 [2024-11-27 14:15:22.381766] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:45.628 [2024-11-27 14:15:22.821353] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:15:47.004 00:15:47.004 real 0m21.641s 00:15:47.004 user 0m24.454s 00:15:47.004 sys 0m3.687s 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:15:47.004 ************************************ 00:15:47.004 END TEST raid_rebuild_test 
00:15:47.004 ************************************ 00:15:47.004 14:15:23 bdev_raid -- bdev/bdev_raid.sh@979 -- # run_test raid_rebuild_test_sb raid_rebuild_test raid1 4 true false true 00:15:47.004 14:15:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:47.004 14:15:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:47.004 14:15:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:47.004 ************************************ 00:15:47.004 START TEST raid_rebuild_test_sb 00:15:47.004 ************************************ 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true false true 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=78176 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@598 -- # waitforlisten 78176 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 78176 ']' 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:47.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:47.004 14:15:23 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:47.004 [2024-11-27 14:15:24.068347] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:15:47.004 I/O size of 3145728 is greater than zero copy threshold (65536). 00:15:47.004 Zero copy mechanism will not be used. 
00:15:47.004 [2024-11-27 14:15:24.068523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78176 ] 00:15:47.004 [2024-11-27 14:15:24.272106] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:47.262 [2024-11-27 14:15:24.406388] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:47.520 [2024-11-27 14:15:24.609158] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:47.520 [2024-11-27 14:15:24.609238] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.088 BaseBdev1_malloc 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.088 [2024-11-27 14:15:25.122786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
BaseBdev1_malloc 00:15:48.088 [2024-11-27 14:15:25.122863] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.088 [2024-11-27 14:15:25.122904] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:15:48.088 [2024-11-27 14:15:25.122925] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.088 [2024-11-27 14:15:25.125733] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.088 [2024-11-27 14:15:25.125802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:48.088 BaseBdev1 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.088 BaseBdev2_malloc 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.088 [2024-11-27 14:15:25.179211] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:15:48.088 [2024-11-27 14:15:25.179289] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.088 [2024-11-27 14:15:25.179323] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:15:48.088 [2024-11-27 14:15:25.179341] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.088 [2024-11-27 14:15:25.182091] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.088 [2024-11-27 14:15:25.182141] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:15:48.088 BaseBdev2 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.088 BaseBdev3_malloc 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.088 [2024-11-27 14:15:25.241621] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:15:48.088 [2024-11-27 14:15:25.241692] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.088 [2024-11-27 14:15:25.241724] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:15:48.088 [2024-11-27 14:15:25.241744] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:48.088 [2024-11-27 14:15:25.244485] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.088 [2024-11-27 14:15:25.244539] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:48.088 BaseBdev3 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.088 BaseBdev4_malloc 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.088 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.089 [2024-11-27 14:15:25.293785] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:15:48.089 [2024-11-27 14:15:25.293878] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.089 [2024-11-27 14:15:25.293909] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:15:48.089 [2024-11-27 14:15:25.293927] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:48.089 [2024-11-27 14:15:25.296601] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.089 [2024-11-27 14:15:25.296654] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:48.089 BaseBdev4 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.089 spare_malloc 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.089 spare_delay 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.089 [2024-11-27 14:15:25.353934] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:15:48.089 [2024-11-27 14:15:25.354006] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:48.089 [2024-11-27 14:15:25.354034] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:15:48.089 [2024-11-27 14:15:25.354052] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:15:48.089 [2024-11-27 14:15:25.356848] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:48.089 [2024-11-27 14:15:25.356900] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:15:48.089 spare 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.089 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.089 [2024-11-27 14:15:25.361986] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:48.347 [2024-11-27 14:15:25.364408] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:48.347 [2024-11-27 14:15:25.364504] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:48.347 [2024-11-27 14:15:25.364591] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:48.347 [2024-11-27 14:15:25.364876] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:15:48.347 [2024-11-27 14:15:25.364912] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:48.347 [2024-11-27 14:15:25.365240] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:15:48.347 [2024-11-27 14:15:25.365486] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:15:48.347 [2024-11-27 14:15:25.365513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:15:48.347 [2024-11-27 14:15:25.365713] bdev_raid.c: 
345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:48.347 "name": "raid_bdev1", 00:15:48.347 "uuid": 
"f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:15:48.347 "strip_size_kb": 0, 00:15:48.347 "state": "online", 00:15:48.347 "raid_level": "raid1", 00:15:48.347 "superblock": true, 00:15:48.347 "num_base_bdevs": 4, 00:15:48.347 "num_base_bdevs_discovered": 4, 00:15:48.347 "num_base_bdevs_operational": 4, 00:15:48.347 "base_bdevs_list": [ 00:15:48.347 { 00:15:48.347 "name": "BaseBdev1", 00:15:48.347 "uuid": "f58ae29c-bd9b-5233-a9a3-b1470b48397f", 00:15:48.347 "is_configured": true, 00:15:48.347 "data_offset": 2048, 00:15:48.347 "data_size": 63488 00:15:48.347 }, 00:15:48.347 { 00:15:48.347 "name": "BaseBdev2", 00:15:48.347 "uuid": "45f87439-c59c-540a-8297-1d489969e7bd", 00:15:48.347 "is_configured": true, 00:15:48.347 "data_offset": 2048, 00:15:48.347 "data_size": 63488 00:15:48.347 }, 00:15:48.347 { 00:15:48.347 "name": "BaseBdev3", 00:15:48.347 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:15:48.347 "is_configured": true, 00:15:48.347 "data_offset": 2048, 00:15:48.347 "data_size": 63488 00:15:48.347 }, 00:15:48.347 { 00:15:48.347 "name": "BaseBdev4", 00:15:48.347 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:15:48.347 "is_configured": true, 00:15:48.347 "data_offset": 2048, 00:15:48.347 "data_size": 63488 00:15:48.347 } 00:15:48.347 ] 00:15:48.347 }' 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:48.347 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.605 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:15:48.605 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:15:48.605 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.605 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.864 [2024-11-27 14:15:25.886546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:15:48.864 14:15:25 
bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:48.864 14:15:25 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:15:49.122 [2024-11-27 14:15:26.258284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:15:49.122 /dev/nbd0 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:49.122 1+0 records in 00:15:49.122 1+0 records out 00:15:49.122 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000255903 s, 16.0 MB/s 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:15:49.122 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:49.123 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:49.123 14:15:26 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:15:49.123 14:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:49.123 14:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:15:49.123 14:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:15:49.123 14:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:15:49.123 14:15:26 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=512 count=63488 oflag=direct 00:15:59.175 63488+0 records in 00:15:59.175 63488+0 records out 00:15:59.175 32505856 bytes (33 MB, 31 MiB) copied, 8.35643 s, 3.9 MB/s 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk 
/dev/nbd0 00:15:59.175 [2024-11-27 14:15:34.959004] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.175 [2024-11-27 14:15:34.987154] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.175 14:15:34 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.175 14:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.175 14:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.175 "name": "raid_bdev1", 00:15:59.175 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:15:59.175 "strip_size_kb": 0, 00:15:59.175 "state": "online", 00:15:59.175 "raid_level": "raid1", 00:15:59.175 "superblock": true, 00:15:59.175 "num_base_bdevs": 4, 00:15:59.175 "num_base_bdevs_discovered": 3, 00:15:59.175 "num_base_bdevs_operational": 3, 00:15:59.175 "base_bdevs_list": [ 00:15:59.175 { 00:15:59.175 "name": null, 00:15:59.175 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.175 "is_configured": false, 00:15:59.176 "data_offset": 0, 00:15:59.176 "data_size": 63488 00:15:59.176 }, 00:15:59.176 { 00:15:59.176 "name": "BaseBdev2", 00:15:59.176 "uuid": "45f87439-c59c-540a-8297-1d489969e7bd", 00:15:59.176 "is_configured": true, 00:15:59.176 
"data_offset": 2048, 00:15:59.176 "data_size": 63488 00:15:59.176 }, 00:15:59.176 { 00:15:59.176 "name": "BaseBdev3", 00:15:59.176 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:15:59.176 "is_configured": true, 00:15:59.176 "data_offset": 2048, 00:15:59.176 "data_size": 63488 00:15:59.176 }, 00:15:59.176 { 00:15:59.176 "name": "BaseBdev4", 00:15:59.176 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:15:59.176 "is_configured": true, 00:15:59.176 "data_offset": 2048, 00:15:59.176 "data_size": 63488 00:15:59.176 } 00:15:59.176 ] 00:15:59.176 }' 00:15:59.176 14:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.176 14:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.176 14:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:15:59.176 14:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.176 14:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.176 [2024-11-27 14:15:35.535272] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:15:59.176 [2024-11-27 14:15:35.549407] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca3500 00:15:59.176 14:15:35 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.176 14:15:35 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:15:59.176 [2024-11-27 14:15:35.551917] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:15:59.436 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:15:59.436 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:15:59.436 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # 
local process_type=rebuild 00:15:59.436 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:15:59.436 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:15:59.436 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.436 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.436 14:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.436 14:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.436 14:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.436 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:15:59.436 "name": "raid_bdev1", 00:15:59.436 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:15:59.436 "strip_size_kb": 0, 00:15:59.436 "state": "online", 00:15:59.436 "raid_level": "raid1", 00:15:59.436 "superblock": true, 00:15:59.436 "num_base_bdevs": 4, 00:15:59.436 "num_base_bdevs_discovered": 4, 00:15:59.436 "num_base_bdevs_operational": 4, 00:15:59.436 "process": { 00:15:59.436 "type": "rebuild", 00:15:59.436 "target": "spare", 00:15:59.436 "progress": { 00:15:59.436 "blocks": 20480, 00:15:59.436 "percent": 32 00:15:59.436 } 00:15:59.437 }, 00:15:59.437 "base_bdevs_list": [ 00:15:59.437 { 00:15:59.437 "name": "spare", 00:15:59.437 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:15:59.437 "is_configured": true, 00:15:59.437 "data_offset": 2048, 00:15:59.437 "data_size": 63488 00:15:59.437 }, 00:15:59.437 { 00:15:59.437 "name": "BaseBdev2", 00:15:59.437 "uuid": "45f87439-c59c-540a-8297-1d489969e7bd", 00:15:59.437 "is_configured": true, 00:15:59.437 "data_offset": 2048, 00:15:59.437 "data_size": 63488 00:15:59.437 }, 00:15:59.437 { 00:15:59.437 "name": "BaseBdev3", 00:15:59.437 "uuid": 
"85e49659-b658-5bbb-8b49-97810c543cce", 00:15:59.437 "is_configured": true, 00:15:59.437 "data_offset": 2048, 00:15:59.437 "data_size": 63488 00:15:59.437 }, 00:15:59.437 { 00:15:59.437 "name": "BaseBdev4", 00:15:59.437 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:15:59.437 "is_configured": true, 00:15:59.437 "data_offset": 2048, 00:15:59.437 "data_size": 63488 00:15:59.437 } 00:15:59.437 ] 00:15:59.437 }' 00:15:59.437 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:15:59.437 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:15:59.437 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.695 [2024-11-27 14:15:36.728989] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.695 [2024-11-27 14:15:36.760975] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:15:59.695 [2024-11-27 14:15:36.761069] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:59.695 [2024-11-27 14:15:36.761098] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:15:59.695 [2024-11-27 14:15:36.761113] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:15:59.695 "name": "raid_bdev1", 00:15:59.695 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:15:59.695 "strip_size_kb": 0, 00:15:59.695 "state": "online", 00:15:59.695 "raid_level": "raid1", 00:15:59.695 "superblock": true, 00:15:59.695 "num_base_bdevs": 4, 00:15:59.695 
"num_base_bdevs_discovered": 3, 00:15:59.695 "num_base_bdevs_operational": 3, 00:15:59.695 "base_bdevs_list": [ 00:15:59.695 { 00:15:59.695 "name": null, 00:15:59.695 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.695 "is_configured": false, 00:15:59.695 "data_offset": 0, 00:15:59.695 "data_size": 63488 00:15:59.695 }, 00:15:59.695 { 00:15:59.695 "name": "BaseBdev2", 00:15:59.695 "uuid": "45f87439-c59c-540a-8297-1d489969e7bd", 00:15:59.695 "is_configured": true, 00:15:59.695 "data_offset": 2048, 00:15:59.695 "data_size": 63488 00:15:59.695 }, 00:15:59.695 { 00:15:59.695 "name": "BaseBdev3", 00:15:59.695 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:15:59.695 "is_configured": true, 00:15:59.695 "data_offset": 2048, 00:15:59.695 "data_size": 63488 00:15:59.695 }, 00:15:59.695 { 00:15:59.695 "name": "BaseBdev4", 00:15:59.695 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:15:59.695 "is_configured": true, 00:15:59.695 "data_offset": 2048, 00:15:59.695 "data_size": 63488 00:15:59.695 } 00:15:59.695 ] 00:15:59.695 }' 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:15:59.695 14:15:36 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.262 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:00.262 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:00.262 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:00.262 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:00.262 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:00.262 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:00.262 14:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:00.262 14:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.262 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:00.262 14:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.262 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:00.262 "name": "raid_bdev1", 00:16:00.262 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:00.262 "strip_size_kb": 0, 00:16:00.262 "state": "online", 00:16:00.262 "raid_level": "raid1", 00:16:00.262 "superblock": true, 00:16:00.262 "num_base_bdevs": 4, 00:16:00.262 "num_base_bdevs_discovered": 3, 00:16:00.262 "num_base_bdevs_operational": 3, 00:16:00.262 "base_bdevs_list": [ 00:16:00.262 { 00:16:00.262 "name": null, 00:16:00.262 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:00.262 "is_configured": false, 00:16:00.262 "data_offset": 0, 00:16:00.262 "data_size": 63488 00:16:00.262 }, 00:16:00.262 { 00:16:00.262 "name": "BaseBdev2", 00:16:00.262 "uuid": "45f87439-c59c-540a-8297-1d489969e7bd", 00:16:00.263 "is_configured": true, 00:16:00.263 "data_offset": 2048, 00:16:00.263 "data_size": 63488 00:16:00.263 }, 00:16:00.263 { 00:16:00.263 "name": "BaseBdev3", 00:16:00.263 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:00.263 "is_configured": true, 00:16:00.263 "data_offset": 2048, 00:16:00.263 "data_size": 63488 00:16:00.263 }, 00:16:00.263 { 00:16:00.263 "name": "BaseBdev4", 00:16:00.263 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:00.263 "is_configured": true, 00:16:00.263 "data_offset": 2048, 00:16:00.263 "data_size": 63488 00:16:00.263 } 00:16:00.263 ] 00:16:00.263 }' 00:16:00.263 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:00.263 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 
00:16:00.263 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:00.263 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:00.263 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:00.263 14:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.263 14:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:00.263 [2024-11-27 14:15:37.444999] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:00.263 [2024-11-27 14:15:37.458349] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000ca35d0 00:16:00.263 14:15:37 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.263 14:15:37 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:00.263 [2024-11-27 14:15:37.460903] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:01.198 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.198 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.198 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.198 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.199 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.199 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.199 14:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.199 14:15:38 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:16:01.199 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.458 "name": "raid_bdev1", 00:16:01.458 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:01.458 "strip_size_kb": 0, 00:16:01.458 "state": "online", 00:16:01.458 "raid_level": "raid1", 00:16:01.458 "superblock": true, 00:16:01.458 "num_base_bdevs": 4, 00:16:01.458 "num_base_bdevs_discovered": 4, 00:16:01.458 "num_base_bdevs_operational": 4, 00:16:01.458 "process": { 00:16:01.458 "type": "rebuild", 00:16:01.458 "target": "spare", 00:16:01.458 "progress": { 00:16:01.458 "blocks": 20480, 00:16:01.458 "percent": 32 00:16:01.458 } 00:16:01.458 }, 00:16:01.458 "base_bdevs_list": [ 00:16:01.458 { 00:16:01.458 "name": "spare", 00:16:01.458 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:16:01.458 "is_configured": true, 00:16:01.458 "data_offset": 2048, 00:16:01.458 "data_size": 63488 00:16:01.458 }, 00:16:01.458 { 00:16:01.458 "name": "BaseBdev2", 00:16:01.458 "uuid": "45f87439-c59c-540a-8297-1d489969e7bd", 00:16:01.458 "is_configured": true, 00:16:01.458 "data_offset": 2048, 00:16:01.458 "data_size": 63488 00:16:01.458 }, 00:16:01.458 { 00:16:01.458 "name": "BaseBdev3", 00:16:01.458 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:01.458 "is_configured": true, 00:16:01.458 "data_offset": 2048, 00:16:01.458 "data_size": 63488 00:16:01.458 }, 00:16:01.458 { 00:16:01.458 "name": "BaseBdev4", 00:16:01.458 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:01.458 "is_configured": true, 00:16:01.458 "data_offset": 2048, 00:16:01.458 "data_size": 63488 00:16:01.458 } 00:16:01.458 ] 00:16:01.458 }' 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:16:01.458 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.458 14:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.458 [2024-11-27 14:15:38.625865] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:01.717 [2024-11-27 14:15:38.769985] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000ca35d0 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.717 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.717 "name": "raid_bdev1", 00:16:01.717 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:01.717 "strip_size_kb": 0, 00:16:01.717 "state": "online", 00:16:01.717 "raid_level": "raid1", 00:16:01.717 "superblock": true, 00:16:01.717 "num_base_bdevs": 4, 00:16:01.717 "num_base_bdevs_discovered": 3, 00:16:01.717 "num_base_bdevs_operational": 3, 00:16:01.717 "process": { 00:16:01.717 "type": "rebuild", 00:16:01.717 "target": "spare", 00:16:01.717 "progress": { 00:16:01.717 "blocks": 24576, 00:16:01.717 "percent": 38 00:16:01.717 } 00:16:01.718 }, 00:16:01.718 "base_bdevs_list": [ 00:16:01.718 { 00:16:01.718 "name": "spare", 00:16:01.718 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:16:01.718 "is_configured": true, 00:16:01.718 "data_offset": 2048, 00:16:01.718 "data_size": 63488 00:16:01.718 }, 00:16:01.718 { 00:16:01.718 "name": null, 00:16:01.718 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:01.718 "is_configured": false, 00:16:01.718 "data_offset": 0, 00:16:01.718 "data_size": 63488 00:16:01.718 }, 00:16:01.718 { 00:16:01.718 "name": "BaseBdev3", 00:16:01.718 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:01.718 "is_configured": true, 00:16:01.718 "data_offset": 2048, 00:16:01.718 "data_size": 63488 00:16:01.718 }, 00:16:01.718 { 00:16:01.718 "name": "BaseBdev4", 00:16:01.718 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:01.718 "is_configured": true, 00:16:01.718 "data_offset": 2048, 00:16:01.718 "data_size": 63488 00:16:01.718 } 00:16:01.718 ] 00:16:01.718 }' 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=505 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:01.718 
14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:01.718 "name": "raid_bdev1", 00:16:01.718 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:01.718 "strip_size_kb": 0, 00:16:01.718 "state": "online", 00:16:01.718 "raid_level": "raid1", 00:16:01.718 "superblock": true, 00:16:01.718 "num_base_bdevs": 4, 00:16:01.718 "num_base_bdevs_discovered": 3, 00:16:01.718 "num_base_bdevs_operational": 3, 00:16:01.718 "process": { 00:16:01.718 "type": "rebuild", 00:16:01.718 "target": "spare", 00:16:01.718 "progress": { 00:16:01.718 "blocks": 26624, 00:16:01.718 "percent": 41 00:16:01.718 } 00:16:01.718 }, 00:16:01.718 "base_bdevs_list": [ 00:16:01.718 { 00:16:01.718 "name": "spare", 00:16:01.718 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:16:01.718 "is_configured": true, 00:16:01.718 "data_offset": 2048, 00:16:01.718 "data_size": 63488 00:16:01.718 }, 00:16:01.718 { 00:16:01.718 "name": null, 00:16:01.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:01.718 "is_configured": false, 00:16:01.718 "data_offset": 0, 00:16:01.718 "data_size": 63488 00:16:01.718 }, 00:16:01.718 { 00:16:01.718 "name": "BaseBdev3", 00:16:01.718 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:01.718 "is_configured": true, 00:16:01.718 "data_offset": 2048, 00:16:01.718 "data_size": 63488 00:16:01.718 }, 00:16:01.718 { 00:16:01.718 "name": "BaseBdev4", 00:16:01.718 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:01.718 "is_configured": true, 00:16:01.718 "data_offset": 2048, 00:16:01.718 "data_size": 63488 
00:16:01.718 } 00:16:01.718 ] 00:16:01.718 }' 00:16:01.718 14:15:38 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:01.977 14:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:01.977 14:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:01.977 14:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:01.977 14:15:39 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.911 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:02.911 "name": "raid_bdev1", 00:16:02.912 "uuid": 
"f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:02.912 "strip_size_kb": 0, 00:16:02.912 "state": "online", 00:16:02.912 "raid_level": "raid1", 00:16:02.912 "superblock": true, 00:16:02.912 "num_base_bdevs": 4, 00:16:02.912 "num_base_bdevs_discovered": 3, 00:16:02.912 "num_base_bdevs_operational": 3, 00:16:02.912 "process": { 00:16:02.912 "type": "rebuild", 00:16:02.912 "target": "spare", 00:16:02.912 "progress": { 00:16:02.912 "blocks": 51200, 00:16:02.912 "percent": 80 00:16:02.912 } 00:16:02.912 }, 00:16:02.912 "base_bdevs_list": [ 00:16:02.912 { 00:16:02.912 "name": "spare", 00:16:02.912 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:16:02.912 "is_configured": true, 00:16:02.912 "data_offset": 2048, 00:16:02.912 "data_size": 63488 00:16:02.912 }, 00:16:02.912 { 00:16:02.912 "name": null, 00:16:02.912 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:02.912 "is_configured": false, 00:16:02.912 "data_offset": 0, 00:16:02.912 "data_size": 63488 00:16:02.912 }, 00:16:02.912 { 00:16:02.912 "name": "BaseBdev3", 00:16:02.912 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:02.912 "is_configured": true, 00:16:02.912 "data_offset": 2048, 00:16:02.912 "data_size": 63488 00:16:02.912 }, 00:16:02.912 { 00:16:02.912 "name": "BaseBdev4", 00:16:02.912 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:02.912 "is_configured": true, 00:16:02.912 "data_offset": 2048, 00:16:02.912 "data_size": 63488 00:16:02.912 } 00:16:02.912 ] 00:16:02.912 }' 00:16:02.912 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:03.169 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:03.169 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:03.169 14:15:40 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:03.169 14:15:40 bdev_raid.raid_rebuild_test_sb -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:03.431 [2024-11-27 14:15:40.684183] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:03.431 [2024-11-27 14:15:40.684299] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:03.431 [2024-11-27 14:15:40.684485] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:04.024 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:04.024 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:04.024 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.024 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:04.024 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:04.024 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.024 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.025 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.025 14:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.025 14:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.025 14:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.282 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.282 "name": "raid_bdev1", 00:16:04.282 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:04.282 "strip_size_kb": 0, 00:16:04.282 "state": "online", 00:16:04.282 "raid_level": "raid1", 00:16:04.282 "superblock": true, 00:16:04.282 "num_base_bdevs": 
4, 00:16:04.282 "num_base_bdevs_discovered": 3, 00:16:04.282 "num_base_bdevs_operational": 3, 00:16:04.282 "base_bdevs_list": [ 00:16:04.282 { 00:16:04.282 "name": "spare", 00:16:04.283 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:16:04.283 "is_configured": true, 00:16:04.283 "data_offset": 2048, 00:16:04.283 "data_size": 63488 00:16:04.283 }, 00:16:04.283 { 00:16:04.283 "name": null, 00:16:04.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.283 "is_configured": false, 00:16:04.283 "data_offset": 0, 00:16:04.283 "data_size": 63488 00:16:04.283 }, 00:16:04.283 { 00:16:04.283 "name": "BaseBdev3", 00:16:04.283 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:04.283 "is_configured": true, 00:16:04.283 "data_offset": 2048, 00:16:04.283 "data_size": 63488 00:16:04.283 }, 00:16:04.283 { 00:16:04.283 "name": "BaseBdev4", 00:16:04.283 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:04.283 "is_configured": true, 00:16:04.283 "data_offset": 2048, 00:16:04.283 "data_size": 63488 00:16:04.283 } 00:16:04.283 ] 00:16:04.283 }' 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:04.283 14:15:41 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:04.283 "name": "raid_bdev1", 00:16:04.283 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:04.283 "strip_size_kb": 0, 00:16:04.283 "state": "online", 00:16:04.283 "raid_level": "raid1", 00:16:04.283 "superblock": true, 00:16:04.283 "num_base_bdevs": 4, 00:16:04.283 "num_base_bdevs_discovered": 3, 00:16:04.283 "num_base_bdevs_operational": 3, 00:16:04.283 "base_bdevs_list": [ 00:16:04.283 { 00:16:04.283 "name": "spare", 00:16:04.283 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:16:04.283 "is_configured": true, 00:16:04.283 "data_offset": 2048, 00:16:04.283 "data_size": 63488 00:16:04.283 }, 00:16:04.283 { 00:16:04.283 "name": null, 00:16:04.283 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.283 "is_configured": false, 00:16:04.283 "data_offset": 0, 00:16:04.283 "data_size": 63488 00:16:04.283 }, 00:16:04.283 { 00:16:04.283 "name": "BaseBdev3", 00:16:04.283 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:04.283 "is_configured": true, 00:16:04.283 "data_offset": 2048, 00:16:04.283 "data_size": 63488 00:16:04.283 }, 00:16:04.283 { 00:16:04.283 "name": "BaseBdev4", 00:16:04.283 "uuid": 
"c773a5d3-3b64-5f27-b674-52253499a352", 00:16:04.283 "is_configured": true, 00:16:04.283 "data_offset": 2048, 00:16:04.283 "data_size": 63488 00:16:04.283 } 00:16:04.283 ] 00:16:04.283 }' 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:04.283 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:04.541 14:15:41 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.541 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:04.542 "name": "raid_bdev1", 00:16:04.542 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:04.542 "strip_size_kb": 0, 00:16:04.542 "state": "online", 00:16:04.542 "raid_level": "raid1", 00:16:04.542 "superblock": true, 00:16:04.542 "num_base_bdevs": 4, 00:16:04.542 "num_base_bdevs_discovered": 3, 00:16:04.542 "num_base_bdevs_operational": 3, 00:16:04.542 "base_bdevs_list": [ 00:16:04.542 { 00:16:04.542 "name": "spare", 00:16:04.542 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:16:04.542 "is_configured": true, 00:16:04.542 "data_offset": 2048, 00:16:04.542 "data_size": 63488 00:16:04.542 }, 00:16:04.542 { 00:16:04.542 "name": null, 00:16:04.542 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.542 "is_configured": false, 00:16:04.542 "data_offset": 0, 00:16:04.542 "data_size": 63488 00:16:04.542 }, 00:16:04.542 { 00:16:04.542 "name": "BaseBdev3", 00:16:04.542 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:04.542 "is_configured": true, 00:16:04.542 "data_offset": 2048, 00:16:04.542 "data_size": 63488 00:16:04.542 }, 00:16:04.542 { 00:16:04.542 "name": "BaseBdev4", 00:16:04.542 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:04.542 "is_configured": true, 00:16:04.542 "data_offset": 2048, 00:16:04.542 "data_size": 63488 00:16:04.542 } 00:16:04.542 ] 00:16:04.542 }' 00:16:04.542 14:15:41 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:04.542 14:15:41 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd 
bdev_raid_delete raid_bdev1 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.109 [2024-11-27 14:15:42.100367] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:05.109 [2024-11-27 14:15:42.100546] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:05.109 [2024-11-27 14:15:42.100675] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:05.109 [2024-11-27 14:15:42.100808] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:05.109 [2024-11-27 14:15:42.100829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:16:05.109 
14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.109 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:16:05.368 /dev/nbd0 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.368 14:15:42 
bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.368 1+0 records in 00:16:05.368 1+0 records out 00:16:05.368 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000633283 s, 6.5 MB/s 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.368 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:16:05.627 /dev/nbd1 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@875 -- 
# (( i <= 20 )) 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:05.627 1+0 records in 00:16:05.627 1+0 records out 00:16:05.627 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000432788 s, 9.5 MB/s 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:05.627 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:05.628 14:15:42 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:16:05.628 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:05.628 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:16:05.628 14:15:42 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:05.887 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:16:05.887 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:05.887 14:15:43 bdev_raid.raid_rebuild_test_sb -- 
bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:16:05.887 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:05.887 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:16:05.887 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:05.887 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:06.146 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:06.146 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:06.146 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:06.146 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:06.146 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.146 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:06.146 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:06.146 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.146 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:06.146 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 
-- # (( i = 1 )) 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.406 [2024-11-27 14:15:43.572869] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:06.406 [2024-11-27 14:15:43.572933] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:06.406 [2024-11-27 14:15:43.572969] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:16:06.406 [2024-11-27 14:15:43.572986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:06.406 [2024-11-27 14:15:43.576011] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:06.406 [2024-11-27 14:15:43.576058] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: spare 00:16:06.406 [2024-11-27 14:15:43.576221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:06.406 [2024-11-27 14:15:43.576283] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:06.406 [2024-11-27 14:15:43.576458] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:06.406 [2024-11-27 14:15:43.576604] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:06.406 spare 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.406 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.406 [2024-11-27 14:15:43.676751] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:06.406 [2024-11-27 14:15:43.676802] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:06.406 [2024-11-27 14:15:43.677381] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1c80 00:16:06.406 [2024-11-27 14:15:43.677651] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:06.406 [2024-11-27 14:15:43.677673] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:06.406 [2024-11-27 14:15:43.677951] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:06.665 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.665 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:06.665 14:15:43 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:06.665 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:06.665 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:06.665 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:06.665 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:06.665 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:06.665 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:06.666 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:06.666 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:06.666 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.666 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.666 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.666 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.666 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.666 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:06.666 "name": "raid_bdev1", 00:16:06.666 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:06.666 "strip_size_kb": 0, 00:16:06.666 "state": "online", 00:16:06.666 "raid_level": "raid1", 00:16:06.666 "superblock": true, 00:16:06.666 "num_base_bdevs": 4, 00:16:06.666 "num_base_bdevs_discovered": 3, 00:16:06.666 "num_base_bdevs_operational": 3, 00:16:06.666 "base_bdevs_list": [ 00:16:06.666 { 
00:16:06.666 "name": "spare", 00:16:06.666 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:16:06.666 "is_configured": true, 00:16:06.666 "data_offset": 2048, 00:16:06.666 "data_size": 63488 00:16:06.666 }, 00:16:06.666 { 00:16:06.666 "name": null, 00:16:06.666 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:06.666 "is_configured": false, 00:16:06.666 "data_offset": 2048, 00:16:06.666 "data_size": 63488 00:16:06.666 }, 00:16:06.666 { 00:16:06.666 "name": "BaseBdev3", 00:16:06.666 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:06.666 "is_configured": true, 00:16:06.666 "data_offset": 2048, 00:16:06.666 "data_size": 63488 00:16:06.666 }, 00:16:06.666 { 00:16:06.666 "name": "BaseBdev4", 00:16:06.666 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:06.666 "is_configured": true, 00:16:06.666 "data_offset": 2048, 00:16:06.666 "data_size": 63488 00:16:06.666 } 00:16:06.666 ] 00:16:06.666 }' 00:16:06.666 14:15:43 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:06.666 14:15:43 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.925 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:06.925 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:06.925 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:06.925 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:06.925 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:06.925 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:06.925 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:06.925 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:16:06.925 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:07.185 "name": "raid_bdev1", 00:16:07.185 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:07.185 "strip_size_kb": 0, 00:16:07.185 "state": "online", 00:16:07.185 "raid_level": "raid1", 00:16:07.185 "superblock": true, 00:16:07.185 "num_base_bdevs": 4, 00:16:07.185 "num_base_bdevs_discovered": 3, 00:16:07.185 "num_base_bdevs_operational": 3, 00:16:07.185 "base_bdevs_list": [ 00:16:07.185 { 00:16:07.185 "name": "spare", 00:16:07.185 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:16:07.185 "is_configured": true, 00:16:07.185 "data_offset": 2048, 00:16:07.185 "data_size": 63488 00:16:07.185 }, 00:16:07.185 { 00:16:07.185 "name": null, 00:16:07.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.185 "is_configured": false, 00:16:07.185 "data_offset": 2048, 00:16:07.185 "data_size": 63488 00:16:07.185 }, 00:16:07.185 { 00:16:07.185 "name": "BaseBdev3", 00:16:07.185 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:07.185 "is_configured": true, 00:16:07.185 "data_offset": 2048, 00:16:07.185 "data_size": 63488 00:16:07.185 }, 00:16:07.185 { 00:16:07.185 "name": "BaseBdev4", 00:16:07.185 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:07.185 "is_configured": true, 00:16:07.185 "data_offset": 2048, 00:16:07.185 "data_size": 63488 00:16:07.185 } 00:16:07.185 ] 00:16:07.185 }' 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:07.185 14:15:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.185 [2024-11-27 14:15:44.390233] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:07.185 14:15:44 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:07.185 "name": "raid_bdev1", 00:16:07.185 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:07.185 "strip_size_kb": 0, 00:16:07.185 "state": "online", 00:16:07.185 "raid_level": "raid1", 00:16:07.185 "superblock": true, 00:16:07.185 "num_base_bdevs": 4, 00:16:07.185 "num_base_bdevs_discovered": 2, 00:16:07.185 "num_base_bdevs_operational": 2, 00:16:07.185 "base_bdevs_list": [ 00:16:07.185 { 00:16:07.185 "name": null, 00:16:07.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.185 "is_configured": false, 00:16:07.185 "data_offset": 0, 00:16:07.185 "data_size": 63488 00:16:07.185 }, 00:16:07.185 { 00:16:07.185 "name": null, 00:16:07.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:07.185 "is_configured": false, 00:16:07.185 "data_offset": 2048, 00:16:07.185 "data_size": 63488 00:16:07.185 }, 00:16:07.185 { 00:16:07.185 "name": "BaseBdev3", 00:16:07.185 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:07.185 
"is_configured": true, 00:16:07.185 "data_offset": 2048, 00:16:07.185 "data_size": 63488 00:16:07.185 }, 00:16:07.185 { 00:16:07.185 "name": "BaseBdev4", 00:16:07.185 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:07.185 "is_configured": true, 00:16:07.185 "data_offset": 2048, 00:16:07.185 "data_size": 63488 00:16:07.185 } 00:16:07.185 ] 00:16:07.185 }' 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:07.185 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.752 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:07.752 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.752 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.752 [2024-11-27 14:15:44.910402] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.752 [2024-11-27 14:15:44.910689] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:07.752 [2024-11-27 14:15:44.910713] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:07.752 [2024-11-27 14:15:44.910766] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:07.752 [2024-11-27 14:15:44.923917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1d50 00:16:07.752 14:15:44 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.752 14:15:44 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:07.752 [2024-11-27 14:15:44.926604] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:08.689 14:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:08.689 14:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:08.689 14:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:08.689 14:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:08.689 14:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:08.689 14:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.689 14:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.689 14:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.689 14:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.689 14:15:45 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.948 14:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:08.948 "name": "raid_bdev1", 00:16:08.948 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:08.948 "strip_size_kb": 0, 00:16:08.948 "state": "online", 00:16:08.948 "raid_level": "raid1", 
00:16:08.948 "superblock": true, 00:16:08.948 "num_base_bdevs": 4, 00:16:08.948 "num_base_bdevs_discovered": 3, 00:16:08.948 "num_base_bdevs_operational": 3, 00:16:08.948 "process": { 00:16:08.948 "type": "rebuild", 00:16:08.948 "target": "spare", 00:16:08.948 "progress": { 00:16:08.948 "blocks": 20480, 00:16:08.948 "percent": 32 00:16:08.948 } 00:16:08.948 }, 00:16:08.948 "base_bdevs_list": [ 00:16:08.948 { 00:16:08.948 "name": "spare", 00:16:08.948 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:16:08.948 "is_configured": true, 00:16:08.948 "data_offset": 2048, 00:16:08.948 "data_size": 63488 00:16:08.948 }, 00:16:08.948 { 00:16:08.948 "name": null, 00:16:08.948 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.948 "is_configured": false, 00:16:08.948 "data_offset": 2048, 00:16:08.948 "data_size": 63488 00:16:08.948 }, 00:16:08.948 { 00:16:08.948 "name": "BaseBdev3", 00:16:08.948 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:08.948 "is_configured": true, 00:16:08.948 "data_offset": 2048, 00:16:08.948 "data_size": 63488 00:16:08.948 }, 00:16:08.948 { 00:16:08.948 "name": "BaseBdev4", 00:16:08.948 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:08.948 "is_configured": true, 00:16:08.948 "data_offset": 2048, 00:16:08.948 "data_size": 63488 00:16:08.948 } 00:16:08.948 ] 00:16:08.948 }' 00:16:08.949 14:15:45 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.949 [2024-11-27 14:15:46.112069] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.949 [2024-11-27 14:15:46.135835] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:08.949 [2024-11-27 14:15:46.135957] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:08.949 [2024-11-27 14:15:46.135988] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:08.949 [2024-11-27 14:15:46.136000] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:08.949 "name": "raid_bdev1", 00:16:08.949 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:08.949 "strip_size_kb": 0, 00:16:08.949 "state": "online", 00:16:08.949 "raid_level": "raid1", 00:16:08.949 "superblock": true, 00:16:08.949 "num_base_bdevs": 4, 00:16:08.949 "num_base_bdevs_discovered": 2, 00:16:08.949 "num_base_bdevs_operational": 2, 00:16:08.949 "base_bdevs_list": [ 00:16:08.949 { 00:16:08.949 "name": null, 00:16:08.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.949 "is_configured": false, 00:16:08.949 "data_offset": 0, 00:16:08.949 "data_size": 63488 00:16:08.949 }, 00:16:08.949 { 00:16:08.949 "name": null, 00:16:08.949 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:08.949 "is_configured": false, 00:16:08.949 "data_offset": 2048, 00:16:08.949 "data_size": 63488 00:16:08.949 }, 00:16:08.949 { 00:16:08.949 "name": "BaseBdev3", 00:16:08.949 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:08.949 "is_configured": true, 00:16:08.949 "data_offset": 2048, 00:16:08.949 "data_size": 63488 00:16:08.949 }, 00:16:08.949 { 00:16:08.949 "name": "BaseBdev4", 00:16:08.949 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:08.949 "is_configured": true, 00:16:08.949 "data_offset": 2048, 00:16:08.949 "data_size": 63488 00:16:08.949 } 00:16:08.949 ] 00:16:08.949 }' 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:16:08.949 14:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.518 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:09.518 14:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.518 14:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.518 [2024-11-27 14:15:46.643840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:09.518 [2024-11-27 14:15:46.643920] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:09.518 [2024-11-27 14:15:46.643968] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:16:09.518 [2024-11-27 14:15:46.643986] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:09.518 [2024-11-27 14:15:46.644606] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:09.518 [2024-11-27 14:15:46.644632] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:09.518 [2024-11-27 14:15:46.644753] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:09.518 [2024-11-27 14:15:46.644774] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:09.518 [2024-11-27 14:15:46.644835] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:09.518 [2024-11-27 14:15:46.644872] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:09.518 [2024-11-27 14:15:46.658361] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000cc1e20 00:16:09.518 spare 00:16:09.518 14:15:46 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.518 14:15:46 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:09.518 [2024-11-27 14:15:46.660947] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:10.455 "name": "raid_bdev1", 00:16:10.455 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:10.455 "strip_size_kb": 0, 00:16:10.455 "state": "online", 00:16:10.455 
"raid_level": "raid1", 00:16:10.455 "superblock": true, 00:16:10.455 "num_base_bdevs": 4, 00:16:10.455 "num_base_bdevs_discovered": 3, 00:16:10.455 "num_base_bdevs_operational": 3, 00:16:10.455 "process": { 00:16:10.455 "type": "rebuild", 00:16:10.455 "target": "spare", 00:16:10.455 "progress": { 00:16:10.455 "blocks": 20480, 00:16:10.455 "percent": 32 00:16:10.455 } 00:16:10.455 }, 00:16:10.455 "base_bdevs_list": [ 00:16:10.455 { 00:16:10.455 "name": "spare", 00:16:10.455 "uuid": "86c7ee8e-283c-5efb-834c-313bcdcafb1e", 00:16:10.455 "is_configured": true, 00:16:10.455 "data_offset": 2048, 00:16:10.455 "data_size": 63488 00:16:10.455 }, 00:16:10.455 { 00:16:10.455 "name": null, 00:16:10.455 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.455 "is_configured": false, 00:16:10.455 "data_offset": 2048, 00:16:10.455 "data_size": 63488 00:16:10.455 }, 00:16:10.455 { 00:16:10.455 "name": "BaseBdev3", 00:16:10.455 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:10.455 "is_configured": true, 00:16:10.455 "data_offset": 2048, 00:16:10.455 "data_size": 63488 00:16:10.455 }, 00:16:10.455 { 00:16:10.455 "name": "BaseBdev4", 00:16:10.455 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:10.455 "is_configured": true, 00:16:10.455 "data_offset": 2048, 00:16:10.455 "data_size": 63488 00:16:10.455 } 00:16:10.455 ] 00:16:10.455 }' 00:16:10.455 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:10.713 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:10.713 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:10.713 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.714 [2024-11-27 14:15:47.830146] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.714 [2024-11-27 14:15:47.870073] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:10.714 [2024-11-27 14:15:47.870342] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.714 [2024-11-27 14:15:47.870373] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:10.714 [2024-11-27 14:15:47.870390] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:10.714 
14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:10.714 "name": "raid_bdev1", 00:16:10.714 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:10.714 "strip_size_kb": 0, 00:16:10.714 "state": "online", 00:16:10.714 "raid_level": "raid1", 00:16:10.714 "superblock": true, 00:16:10.714 "num_base_bdevs": 4, 00:16:10.714 "num_base_bdevs_discovered": 2, 00:16:10.714 "num_base_bdevs_operational": 2, 00:16:10.714 "base_bdevs_list": [ 00:16:10.714 { 00:16:10.714 "name": null, 00:16:10.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.714 "is_configured": false, 00:16:10.714 "data_offset": 0, 00:16:10.714 "data_size": 63488 00:16:10.714 }, 00:16:10.714 { 00:16:10.714 "name": null, 00:16:10.714 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:10.714 "is_configured": false, 00:16:10.714 "data_offset": 2048, 00:16:10.714 "data_size": 63488 00:16:10.714 }, 00:16:10.714 { 00:16:10.714 "name": "BaseBdev3", 00:16:10.714 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:10.714 "is_configured": true, 00:16:10.714 "data_offset": 2048, 00:16:10.714 "data_size": 63488 00:16:10.714 }, 00:16:10.714 { 00:16:10.714 "name": "BaseBdev4", 00:16:10.714 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:10.714 "is_configured": true, 00:16:10.714 "data_offset": 2048, 00:16:10.714 "data_size": 63488 00:16:10.714 } 00:16:10.714 ] 00:16:10.714 }' 00:16:10.714 14:15:47 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:10.714 14:15:47 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:11.287 "name": "raid_bdev1", 00:16:11.287 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:11.287 "strip_size_kb": 0, 00:16:11.287 "state": "online", 00:16:11.287 "raid_level": "raid1", 00:16:11.287 "superblock": true, 00:16:11.287 "num_base_bdevs": 4, 00:16:11.287 "num_base_bdevs_discovered": 2, 00:16:11.287 "num_base_bdevs_operational": 2, 00:16:11.287 "base_bdevs_list": [ 00:16:11.287 { 00:16:11.287 "name": null, 00:16:11.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.287 "is_configured": false, 00:16:11.287 "data_offset": 0, 00:16:11.287 "data_size": 63488 00:16:11.287 }, 00:16:11.287 
{ 00:16:11.287 "name": null, 00:16:11.287 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:11.287 "is_configured": false, 00:16:11.287 "data_offset": 2048, 00:16:11.287 "data_size": 63488 00:16:11.287 }, 00:16:11.287 { 00:16:11.287 "name": "BaseBdev3", 00:16:11.287 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:11.287 "is_configured": true, 00:16:11.287 "data_offset": 2048, 00:16:11.287 "data_size": 63488 00:16:11.287 }, 00:16:11.287 { 00:16:11.287 "name": "BaseBdev4", 00:16:11.287 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:11.287 "is_configured": true, 00:16:11.287 "data_offset": 2048, 00:16:11.287 "data_size": 63488 00:16:11.287 } 00:16:11.287 ] 00:16:11.287 }' 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.287 14:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.610 14:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.610 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:11.610 14:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.610 14:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.610 [2024-11-27 14:15:48.569994] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:11.610 [2024-11-27 14:15:48.570067] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:11.610 [2024-11-27 14:15:48.570095] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:16:11.610 [2024-11-27 14:15:48.570114] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:11.610 [2024-11-27 14:15:48.570703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:11.610 [2024-11-27 14:15:48.570742] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:11.610 [2024-11-27 14:15:48.570859] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:11.610 [2024-11-27 14:15:48.570888] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:11.610 [2024-11-27 14:15:48.570899] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:11.610 [2024-11-27 14:15:48.570934] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:11.610 BaseBdev1 00:16:11.610 14:15:48 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.610 14:15:48 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:12.593 14:15:49 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.593 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:12.593 "name": "raid_bdev1", 00:16:12.593 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:12.593 "strip_size_kb": 0, 00:16:12.593 "state": "online", 00:16:12.593 "raid_level": "raid1", 00:16:12.593 "superblock": true, 00:16:12.593 "num_base_bdevs": 4, 00:16:12.593 "num_base_bdevs_discovered": 2, 00:16:12.593 "num_base_bdevs_operational": 2, 00:16:12.593 "base_bdevs_list": [ 00:16:12.593 { 00:16:12.593 "name": null, 00:16:12.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.593 "is_configured": false, 00:16:12.593 "data_offset": 0, 00:16:12.593 "data_size": 63488 00:16:12.593 }, 00:16:12.593 { 00:16:12.593 "name": null, 00:16:12.593 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:12.593 
"is_configured": false, 00:16:12.593 "data_offset": 2048, 00:16:12.593 "data_size": 63488 00:16:12.593 }, 00:16:12.593 { 00:16:12.593 "name": "BaseBdev3", 00:16:12.593 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:12.593 "is_configured": true, 00:16:12.593 "data_offset": 2048, 00:16:12.594 "data_size": 63488 00:16:12.594 }, 00:16:12.594 { 00:16:12.594 "name": "BaseBdev4", 00:16:12.594 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:12.594 "is_configured": true, 00:16:12.594 "data_offset": 2048, 00:16:12.594 "data_size": 63488 00:16:12.594 } 00:16:12.594 ] 00:16:12.594 }' 00:16:12.594 14:15:49 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:12.594 14:15:49 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.852 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:12.852 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:12.852 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:12.852 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:12.852 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:12.852 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:12.852 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:12.852 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.852 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:12.852 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:16:13.111 "name": "raid_bdev1", 00:16:13.111 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:13.111 "strip_size_kb": 0, 00:16:13.111 "state": "online", 00:16:13.111 "raid_level": "raid1", 00:16:13.111 "superblock": true, 00:16:13.111 "num_base_bdevs": 4, 00:16:13.111 "num_base_bdevs_discovered": 2, 00:16:13.111 "num_base_bdevs_operational": 2, 00:16:13.111 "base_bdevs_list": [ 00:16:13.111 { 00:16:13.111 "name": null, 00:16:13.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.111 "is_configured": false, 00:16:13.111 "data_offset": 0, 00:16:13.111 "data_size": 63488 00:16:13.111 }, 00:16:13.111 { 00:16:13.111 "name": null, 00:16:13.111 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:13.111 "is_configured": false, 00:16:13.111 "data_offset": 2048, 00:16:13.111 "data_size": 63488 00:16:13.111 }, 00:16:13.111 { 00:16:13.111 "name": "BaseBdev3", 00:16:13.111 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:13.111 "is_configured": true, 00:16:13.111 "data_offset": 2048, 00:16:13.111 "data_size": 63488 00:16:13.111 }, 00:16:13.111 { 00:16:13.111 "name": "BaseBdev4", 00:16:13.111 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:13.111 "is_configured": true, 00:16:13.111 "data_offset": 2048, 00:16:13.111 "data_size": 63488 00:16:13.111 } 00:16:13.111 ] 00:16:13.111 }' 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@652 -- # local 
es=0 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.111 [2024-11-27 14:15:50.262472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:13.111 [2024-11-27 14:15:50.262737] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:13.111 [2024-11-27 14:15:50.262761] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:13.111 request: 00:16:13.111 { 00:16:13.111 "base_bdev": "BaseBdev1", 00:16:13.111 "raid_bdev": "raid_bdev1", 00:16:13.111 "method": "bdev_raid_add_base_bdev", 00:16:13.111 "req_id": 1 00:16:13.111 } 00:16:13.111 Got JSON-RPC error response 00:16:13.111 response: 00:16:13.111 { 00:16:13.111 "code": -22, 00:16:13.111 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:13.111 } 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- 
common/autotest_common.sh@655 -- # es=1 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:13.111 14:15:50 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 
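The expected-failure check traced above (`NOT rpc_cmd bdev_raid_add_base_bdev ...`, followed by the `es=1` bookkeeping) relies on an exit-status-inverting helper in autotest_common.sh. A simplified sketch of that idea, not the exact SPDK code (the real helper also validates the command with `valid_exec_arg` and special-cases statuses above 128):

```shell
# Simplified, hypothetical sketch of an exit-status-inverting test helper,
# modeled on the NOT pattern visible in the trace above.
NOT() {
    local es=0
    "$@" || es=$?            # run the wrapped command, capture its status
    if (( es == 0 )); then
        return 1             # command unexpectedly succeeded: test fails
    fi
    return 0                 # command failed as expected: test passes
}
```

In the trace, `bdev_raid_add_base_bdev` returns -22 (Invalid argument) because BaseBdev1's superblock sequence number (1) is stale relative to raid_bdev1 (6), so the wrapped failure counts as a pass.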
00:16:14.048 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.306 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:14.306 "name": "raid_bdev1", 00:16:14.306 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:14.306 "strip_size_kb": 0, 00:16:14.306 "state": "online", 00:16:14.306 "raid_level": "raid1", 00:16:14.306 "superblock": true, 00:16:14.306 "num_base_bdevs": 4, 00:16:14.306 "num_base_bdevs_discovered": 2, 00:16:14.307 "num_base_bdevs_operational": 2, 00:16:14.307 "base_bdevs_list": [ 00:16:14.307 { 00:16:14.307 "name": null, 00:16:14.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.307 "is_configured": false, 00:16:14.307 "data_offset": 0, 00:16:14.307 "data_size": 63488 00:16:14.307 }, 00:16:14.307 { 00:16:14.307 "name": null, 00:16:14.307 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.307 "is_configured": false, 00:16:14.307 "data_offset": 2048, 00:16:14.307 "data_size": 63488 00:16:14.307 }, 00:16:14.307 { 00:16:14.307 "name": "BaseBdev3", 00:16:14.307 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:14.307 "is_configured": true, 00:16:14.307 "data_offset": 2048, 00:16:14.307 "data_size": 63488 00:16:14.307 }, 00:16:14.307 { 00:16:14.307 "name": "BaseBdev4", 00:16:14.307 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:14.307 "is_configured": true, 00:16:14.307 "data_offset": 2048, 00:16:14.307 "data_size": 63488 00:16:14.307 } 00:16:14.307 ] 00:16:14.307 }' 00:16:14.307 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:14.307 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.565 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:14.565 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:14.565 14:15:51 
bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:14.565 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:14.565 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:14.565 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:14.565 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.565 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:14.565 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:14.565 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.565 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:14.565 "name": "raid_bdev1", 00:16:14.565 "uuid": "f7950938-d0c3-4d28-af26-6aa5e68dd81d", 00:16:14.565 "strip_size_kb": 0, 00:16:14.565 "state": "online", 00:16:14.565 "raid_level": "raid1", 00:16:14.565 "superblock": true, 00:16:14.565 "num_base_bdevs": 4, 00:16:14.565 "num_base_bdevs_discovered": 2, 00:16:14.565 "num_base_bdevs_operational": 2, 00:16:14.565 "base_bdevs_list": [ 00:16:14.565 { 00:16:14.565 "name": null, 00:16:14.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.565 "is_configured": false, 00:16:14.565 "data_offset": 0, 00:16:14.565 "data_size": 63488 00:16:14.565 }, 00:16:14.565 { 00:16:14.565 "name": null, 00:16:14.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:14.565 "is_configured": false, 00:16:14.565 "data_offset": 2048, 00:16:14.565 "data_size": 63488 00:16:14.565 }, 00:16:14.565 { 00:16:14.565 "name": "BaseBdev3", 00:16:14.565 "uuid": "85e49659-b658-5bbb-8b49-97810c543cce", 00:16:14.565 "is_configured": true, 00:16:14.565 "data_offset": 2048, 00:16:14.565 "data_size": 63488 00:16:14.565 }, 
00:16:14.565 { 00:16:14.565 "name": "BaseBdev4", 00:16:14.565 "uuid": "c773a5d3-3b64-5f27-b674-52253499a352", 00:16:14.565 "is_configured": true, 00:16:14.565 "data_offset": 2048, 00:16:14.565 "data_size": 63488 00:16:14.565 } 00:16:14.565 ] 00:16:14.565 }' 00:16:14.565 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:14.824 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:14.824 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:14.824 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:14.824 14:15:51 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 78176 00:16:14.824 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 78176 ']' 00:16:14.824 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 78176 00:16:14.824 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:16:14.824 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.824 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78176 00:16:14.824 killing process with pid 78176 00:16:14.824 Received shutdown signal, test time was about 60.000000 seconds 00:16:14.824 00:16:14.825 Latency(us) 00:16:14.825 [2024-11-27T14:15:52.103Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:14.825 [2024-11-27T14:15:52.103Z] =================================================================================================================== 00:16:14.825 [2024-11-27T14:15:52.103Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:16:14.825 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:16:14.825 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.825 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78176' 00:16:14.825 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 78176 00:16:14.825 [2024-11-27 14:15:51.967789] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:14.825 14:15:51 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 78176 00:16:14.825 [2024-11-27 14:15:51.967939] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:14.825 [2024-11-27 14:15:51.968036] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:14.825 [2024-11-27 14:15:51.968054] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:15.389 [2024-11-27 14:15:52.422004] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:16:16.320 00:16:16.320 real 0m29.586s 00:16:16.320 user 0m35.886s 00:16:16.320 sys 0m4.199s 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:16.320 ************************************ 00:16:16.320 END TEST raid_rebuild_test_sb 00:16:16.320 ************************************ 00:16:16.320 14:15:53 bdev_raid -- bdev/bdev_raid.sh@980 -- # run_test raid_rebuild_test_io raid_rebuild_test raid1 4 false true true 00:16:16.320 14:15:53 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:16.320 14:15:53 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:16.320 14:15:53 bdev_raid -- common/autotest_common.sh@10 -- # set +x 
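The teardown above goes through the `killprocess` helper, which checks that the PID still refers to the expected process before signalling it. A condensed, hypothetical sketch of that flow (the real helper in autotest_common.sh additionally compares the process name via `ps --no-headers -o comm=` and handles the sudo case, as the trace shows):

```shell
# Condensed, hypothetical sketch of a killprocess-style teardown helper.
killprocess() {
    local pid=$1
    [[ -n $pid ]] || return 1               # a pid argument is required
    kill -0 "$pid" 2>/dev/null || return 0  # already gone: nothing to do
    echo "killing process with pid $pid"
    kill "$pid"                             # send SIGTERM
    wait "$pid" 2>/dev/null || true         # reap it, ignoring its exit code
}
```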
00:16:16.320 ************************************ 00:16:16.320 START TEST raid_rebuild_test_io 00:16:16.320 ************************************ 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 false true true 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev4 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@597 -- # raid_pid=78974 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 78974 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@835 -- # '[' -z 78974 ']' 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:16.320 Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock... 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:16.320 14:15:53 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:16.577 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:16.577 Zero copy mechanism will not be used. 00:16:16.577 [2024-11-27 14:15:53.660510] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:16:16.577 [2024-11-27 14:15:53.660673] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78974 ] 00:16:16.577 [2024-11-27 14:15:53.828565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:16.834 [2024-11-27 14:15:53.959850] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:17.092 [2024-11-27 14:15:54.211269] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.092 [2024-11-27 14:15:54.211346] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@868 -- # return 0 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:17.659 14:15:54 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.659 BaseBdev1_malloc 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.659 [2024-11-27 14:15:54.772109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:17.659 [2024-11-27 14:15:54.772188] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.659 [2024-11-27 14:15:54.772220] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:16:17.659 [2024-11-27 14:15:54.772238] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.659 [2024-11-27 14:15:54.775056] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.659 [2024-11-27 14:15:54.775107] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:17.659 BaseBdev1 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.659 
BaseBdev2_malloc 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.659 [2024-11-27 14:15:54.824213] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:17.659 [2024-11-27 14:15:54.824297] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.659 [2024-11-27 14:15:54.824329] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:17.659 [2024-11-27 14:15:54.824347] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.659 [2024-11-27 14:15:54.827180] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.659 [2024-11-27 14:15:54.827231] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:17.659 BaseBdev2 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.659 BaseBdev3_malloc 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.659 [2024-11-27 14:15:54.887566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:17.659 [2024-11-27 14:15:54.887638] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.659 [2024-11-27 14:15:54.887672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:17.659 [2024-11-27 14:15:54.887690] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.659 [2024-11-27 14:15:54.890396] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.659 [2024-11-27 14:15:54.890578] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:17.659 BaseBdev3 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.659 BaseBdev4_malloc 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:17.659 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.918 [2024-11-27 14:15:54.939514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:17.918 [2024-11-27 14:15:54.939591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.918 [2024-11-27 14:15:54.939623] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:17.918 [2024-11-27 14:15:54.939640] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.918 [2024-11-27 14:15:54.942364] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.918 [2024-11-27 14:15:54.942424] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:17.918 BaseBdev4 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.918 spare_malloc 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.918 spare_delay 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.918 14:15:54 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.918 [2024-11-27 14:15:54.999683] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:17.918 [2024-11-27 14:15:54.999755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:17.918 [2024-11-27 14:15:54.999806] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:17.918 [2024-11-27 14:15:54.999827] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:17.918 [2024-11-27 14:15:55.002544] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:17.918 [2024-11-27 14:15:55.002595] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:17.918 spare 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.918 [2024-11-27 14:15:55.007733] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:17.918 [2024-11-27 14:15:55.010255] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:17.918 [2024-11-27 14:15:55.010466] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:17.918 [2024-11-27 14:15:55.010673] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: 
bdev BaseBdev4 is claimed 00:16:17.918 [2024-11-27 14:15:55.011011] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:17.918 [2024-11-27 14:15:55.011163] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:16:17.918 [2024-11-27 14:15:55.011538] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:17.918 [2024-11-27 14:15:55.011896] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:17.918 [2024-11-27 14:15:55.012027] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:17.918 [2024-11-27 14:15:55.012443] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.918 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:17.918 "name": "raid_bdev1", 00:16:17.919 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:17.919 "strip_size_kb": 0, 00:16:17.919 "state": "online", 00:16:17.919 "raid_level": "raid1", 00:16:17.919 "superblock": false, 00:16:17.919 "num_base_bdevs": 4, 00:16:17.919 "num_base_bdevs_discovered": 4, 00:16:17.919 "num_base_bdevs_operational": 4, 00:16:17.919 "base_bdevs_list": [ 00:16:17.919 { 00:16:17.919 "name": "BaseBdev1", 00:16:17.919 "uuid": "9d07086e-b2c1-59bd-ba33-1a928eca21fc", 00:16:17.919 "is_configured": true, 00:16:17.919 "data_offset": 0, 00:16:17.919 "data_size": 65536 00:16:17.919 }, 00:16:17.919 { 00:16:17.919 "name": "BaseBdev2", 00:16:17.919 "uuid": "9be1cb30-334d-5bb2-9b96-75241eb1b873", 00:16:17.919 "is_configured": true, 00:16:17.919 "data_offset": 0, 00:16:17.919 "data_size": 65536 00:16:17.919 }, 00:16:17.919 { 00:16:17.919 "name": "BaseBdev3", 00:16:17.919 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:17.919 "is_configured": true, 00:16:17.919 "data_offset": 0, 00:16:17.919 "data_size": 65536 00:16:17.919 }, 00:16:17.919 { 00:16:17.919 "name": "BaseBdev4", 00:16:17.919 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:17.919 "is_configured": true, 00:16:17.919 "data_offset": 0, 00:16:17.919 "data_size": 65536 00:16:17.919 } 00:16:17.919 ] 00:16:17.919 }' 00:16:17.919 
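`verify_raid_bdev_state` builds the JSON dumped above by piping `rpc_cmd bdev_raid_get_bdevs all` through a jq filter and then reading individual fields out of the result. The same filter pattern, shown here against a trimmed stand-in payload rather than a live RPC socket:

```shell
# Select one raid bdev from a bdev_raid_get_bdevs-style array and pull
# fields out of it, as verify_raid_bdev_state does. The payload below is
# a trimmed stand-in for real RPC output, not captured from SPDK.
bdevs='[{"name":"raid_bdev1","state":"online","raid_level":"raid1",
         "num_base_bdevs_discovered":4,"num_base_bdevs_operational":4}]'
info=$(echo "$bdevs" | jq -r '.[] | select(.name == "raid_bdev1")')
state=$(echo "$info" | jq -r '.state')
raid_level=$(echo "$info" | jq -r '.raid_level')
```

The related `.process.type // "none"` filter used by `verify_raid_bdev_process` relies on jq's alternative operator to default to `"none"` when no rebuild process is attached to the bdev.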
14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:17.919 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.555 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:18.555 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:18.555 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.555 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.555 [2024-11-27 14:15:55.544989] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:18.555 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.555 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=65536 00:16:18.555 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:18.556 14:15:55 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.556 [2024-11-27 14:15:55.648592] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:18.556 "name": "raid_bdev1", 00:16:18.556 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:18.556 "strip_size_kb": 0, 00:16:18.556 "state": "online", 00:16:18.556 "raid_level": "raid1", 00:16:18.556 "superblock": false, 00:16:18.556 "num_base_bdevs": 4, 00:16:18.556 "num_base_bdevs_discovered": 3, 00:16:18.556 "num_base_bdevs_operational": 3, 00:16:18.556 "base_bdevs_list": [ 00:16:18.556 { 00:16:18.556 "name": null, 00:16:18.556 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:18.556 "is_configured": false, 00:16:18.556 "data_offset": 0, 00:16:18.556 "data_size": 65536 00:16:18.556 }, 00:16:18.556 { 00:16:18.556 "name": "BaseBdev2", 00:16:18.556 "uuid": "9be1cb30-334d-5bb2-9b96-75241eb1b873", 00:16:18.556 "is_configured": true, 00:16:18.556 "data_offset": 0, 00:16:18.556 "data_size": 65536 00:16:18.556 }, 00:16:18.556 { 00:16:18.556 "name": "BaseBdev3", 00:16:18.556 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:18.556 "is_configured": true, 00:16:18.556 "data_offset": 0, 00:16:18.556 "data_size": 65536 00:16:18.556 }, 00:16:18.556 { 00:16:18.556 "name": "BaseBdev4", 00:16:18.556 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:18.556 "is_configured": true, 00:16:18.556 "data_offset": 0, 00:16:18.556 "data_size": 65536 00:16:18.556 } 00:16:18.556 ] 00:16:18.556 }' 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:18.556 14:15:55 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:18.556 [2024-11-27 14:15:55.780639] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:18.556 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:18.556 Zero copy mechanism will not be used. 00:16:18.556 Running I/O for 60 seconds... 
00:16:19.122 14:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:19.122 14:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.122 14:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:19.122 [2024-11-27 14:15:56.188150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:19.122 14:15:56 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.122 14:15:56 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@647 -- # sleep 1 00:16:19.122 [2024-11-27 14:15:56.255916] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0 00:16:19.122 [2024-11-27 14:15:56.258798] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:19.122 [2024-11-27 14:15:56.362076] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:19.122 [2024-11-27 14:15:56.363995] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:19.381 [2024-11-27 14:15:56.605497] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:19.381 [2024-11-27 14:15:56.606598] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:19.900 124.00 IOPS, 372.00 MiB/s [2024-11-27T14:15:57.178Z] [2024-11-27 14:15:56.956159] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local 
raid_bdev_name=raid_bdev1 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:20.159 "name": "raid_bdev1", 00:16:20.159 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:20.159 "strip_size_kb": 0, 00:16:20.159 "state": "online", 00:16:20.159 "raid_level": "raid1", 00:16:20.159 "superblock": false, 00:16:20.159 "num_base_bdevs": 4, 00:16:20.159 "num_base_bdevs_discovered": 4, 00:16:20.159 "num_base_bdevs_operational": 4, 00:16:20.159 "process": { 00:16:20.159 "type": "rebuild", 00:16:20.159 "target": "spare", 00:16:20.159 "progress": { 00:16:20.159 "blocks": 10240, 00:16:20.159 "percent": 15 00:16:20.159 } 00:16:20.159 }, 00:16:20.159 "base_bdevs_list": [ 00:16:20.159 { 00:16:20.159 "name": "spare", 00:16:20.159 "uuid": "551d1993-9f74-5227-892a-87d1c5403f42", 00:16:20.159 "is_configured": true, 00:16:20.159 "data_offset": 0, 00:16:20.159 "data_size": 65536 00:16:20.159 }, 00:16:20.159 { 00:16:20.159 "name": "BaseBdev2", 00:16:20.159 "uuid": "9be1cb30-334d-5bb2-9b96-75241eb1b873", 00:16:20.159 "is_configured": true, 00:16:20.159 "data_offset": 0, 00:16:20.159 
"data_size": 65536 00:16:20.159 }, 00:16:20.159 { 00:16:20.159 "name": "BaseBdev3", 00:16:20.159 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:20.159 "is_configured": true, 00:16:20.159 "data_offset": 0, 00:16:20.159 "data_size": 65536 00:16:20.159 }, 00:16:20.159 { 00:16:20.159 "name": "BaseBdev4", 00:16:20.159 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:20.159 "is_configured": true, 00:16:20.159 "data_offset": 0, 00:16:20.159 "data_size": 65536 00:16:20.159 } 00:16:20.159 ] 00:16:20.159 }' 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:20.159 [2024-11-27 14:15:57.368669] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.159 14:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.159 [2024-11-27 14:15:57.420947] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.419 [2024-11-27 14:15:57.482208] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:20.419 [2024-11-27 14:15:57.594258] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:20.419 [2024-11-27 14:15:57.600119] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:16:20.419 [2024-11-27 14:15:57.600175] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:20.419 [2024-11-27 14:15:57.600223] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:20.419 [2024-11-27 14:15:57.632859] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.419 14:15:57 bdev_raid.raid_rebuild_test_io 
-- common/autotest_common.sh@10 -- # set +x 00:16:20.679 14:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.679 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:20.679 "name": "raid_bdev1", 00:16:20.679 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:20.679 "strip_size_kb": 0, 00:16:20.679 "state": "online", 00:16:20.679 "raid_level": "raid1", 00:16:20.679 "superblock": false, 00:16:20.679 "num_base_bdevs": 4, 00:16:20.679 "num_base_bdevs_discovered": 3, 00:16:20.679 "num_base_bdevs_operational": 3, 00:16:20.679 "base_bdevs_list": [ 00:16:20.679 { 00:16:20.679 "name": null, 00:16:20.679 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:20.679 "is_configured": false, 00:16:20.679 "data_offset": 0, 00:16:20.679 "data_size": 65536 00:16:20.679 }, 00:16:20.679 { 00:16:20.679 "name": "BaseBdev2", 00:16:20.679 "uuid": "9be1cb30-334d-5bb2-9b96-75241eb1b873", 00:16:20.679 "is_configured": true, 00:16:20.679 "data_offset": 0, 00:16:20.679 "data_size": 65536 00:16:20.679 }, 00:16:20.679 { 00:16:20.679 "name": "BaseBdev3", 00:16:20.679 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:20.679 "is_configured": true, 00:16:20.679 "data_offset": 0, 00:16:20.679 "data_size": 65536 00:16:20.679 }, 00:16:20.679 { 00:16:20.679 "name": "BaseBdev4", 00:16:20.679 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:20.679 "is_configured": true, 00:16:20.679 "data_offset": 0, 00:16:20.679 "data_size": 65536 00:16:20.679 } 00:16:20.679 ] 00:16:20.679 }' 00:16:20.679 14:15:57 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:20.679 14:15:57 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:20.939 107.00 IOPS, 321.00 MiB/s [2024-11-27T14:15:58.217Z] 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:20.939 14:15:58 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:20.939 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:20.939 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:20.939 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:20.939 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:20.939 14:15:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.939 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:20.939 14:15:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.199 14:15:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.199 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:21.199 "name": "raid_bdev1", 00:16:21.199 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:21.199 "strip_size_kb": 0, 00:16:21.199 "state": "online", 00:16:21.199 "raid_level": "raid1", 00:16:21.199 "superblock": false, 00:16:21.199 "num_base_bdevs": 4, 00:16:21.199 "num_base_bdevs_discovered": 3, 00:16:21.199 "num_base_bdevs_operational": 3, 00:16:21.199 "base_bdevs_list": [ 00:16:21.199 { 00:16:21.199 "name": null, 00:16:21.199 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:21.199 "is_configured": false, 00:16:21.199 "data_offset": 0, 00:16:21.199 "data_size": 65536 00:16:21.199 }, 00:16:21.199 { 00:16:21.199 "name": "BaseBdev2", 00:16:21.199 "uuid": "9be1cb30-334d-5bb2-9b96-75241eb1b873", 00:16:21.199 "is_configured": true, 00:16:21.199 "data_offset": 0, 00:16:21.199 "data_size": 65536 00:16:21.199 }, 00:16:21.199 { 00:16:21.199 "name": "BaseBdev3", 00:16:21.199 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:21.199 "is_configured": 
true, 00:16:21.199 "data_offset": 0, 00:16:21.199 "data_size": 65536 00:16:21.199 }, 00:16:21.199 { 00:16:21.199 "name": "BaseBdev4", 00:16:21.199 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:21.199 "is_configured": true, 00:16:21.199 "data_offset": 0, 00:16:21.199 "data_size": 65536 00:16:21.199 } 00:16:21.199 ] 00:16:21.199 }' 00:16:21.199 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:21.199 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:21.199 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:21.199 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:21.199 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:21.199 14:15:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.199 14:15:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:21.199 [2024-11-27 14:15:58.375237] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:21.199 14:15:58 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.199 14:15:58 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@663 -- # sleep 1 00:16:21.199 [2024-11-27 14:15:58.455277] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:16:21.199 [2024-11-27 14:15:58.458161] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:21.458 [2024-11-27 14:15:58.587177] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144 00:16:21.458 [2024-11-27 14:15:58.588963] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 
offset_end: 6144 00:16:21.717 120.33 IOPS, 361.00 MiB/s [2024-11-27T14:15:58.995Z] [2024-11-27 14:15:58.813168] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:21.717 [2024-11-27 14:15:58.813538] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144 00:16:22.285 [2024-11-27 14:15:59.361478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.285 "name": "raid_bdev1", 00:16:22.285 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:22.285 "strip_size_kb": 0, 00:16:22.285 "state": "online", 00:16:22.285 "raid_level": "raid1", 00:16:22.285 "superblock": false, 
00:16:22.285 "num_base_bdevs": 4, 00:16:22.285 "num_base_bdevs_discovered": 4, 00:16:22.285 "num_base_bdevs_operational": 4, 00:16:22.285 "process": { 00:16:22.285 "type": "rebuild", 00:16:22.285 "target": "spare", 00:16:22.285 "progress": { 00:16:22.285 "blocks": 10240, 00:16:22.285 "percent": 15 00:16:22.285 } 00:16:22.285 }, 00:16:22.285 "base_bdevs_list": [ 00:16:22.285 { 00:16:22.285 "name": "spare", 00:16:22.285 "uuid": "551d1993-9f74-5227-892a-87d1c5403f42", 00:16:22.285 "is_configured": true, 00:16:22.285 "data_offset": 0, 00:16:22.285 "data_size": 65536 00:16:22.285 }, 00:16:22.285 { 00:16:22.285 "name": "BaseBdev2", 00:16:22.285 "uuid": "9be1cb30-334d-5bb2-9b96-75241eb1b873", 00:16:22.285 "is_configured": true, 00:16:22.285 "data_offset": 0, 00:16:22.285 "data_size": 65536 00:16:22.285 }, 00:16:22.285 { 00:16:22.285 "name": "BaseBdev3", 00:16:22.285 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:22.285 "is_configured": true, 00:16:22.285 "data_offset": 0, 00:16:22.285 "data_size": 65536 00:16:22.285 }, 00:16:22.285 { 00:16:22.285 "name": "BaseBdev4", 00:16:22.285 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:22.285 "is_configured": true, 00:16:22.285 "data_offset": 0, 00:16:22.285 "data_size": 65536 00:16:22.285 } 00:16:22.285 ] 00:16:22.285 }' 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.285 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:16:22.544 14:15:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']' 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.544 [2024-11-27 14:15:59.594030] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:22.544 [2024-11-27 14:15:59.633572] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:22.544 [2024-11-27 14:15:59.634334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:22.544 [2024-11-27 14:15:59.745746] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220 00:16:22.544 [2024-11-27 14:15:59.745858] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0 00:16:22.544 [2024-11-27 14:15:59.748452] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]= 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- )) 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.544 14:15:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.544 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.544 109.25 IOPS, 327.75 MiB/s [2024-11-27T14:15:59.822Z] 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.544 "name": "raid_bdev1", 00:16:22.544 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:22.544 "strip_size_kb": 0, 00:16:22.544 "state": "online", 00:16:22.544 "raid_level": "raid1", 00:16:22.544 "superblock": false, 00:16:22.544 "num_base_bdevs": 4, 00:16:22.544 "num_base_bdevs_discovered": 3, 00:16:22.544 "num_base_bdevs_operational": 3, 00:16:22.544 "process": { 00:16:22.544 "type": "rebuild", 00:16:22.544 "target": "spare", 00:16:22.544 "progress": { 00:16:22.544 "blocks": 14336, 00:16:22.544 "percent": 21 00:16:22.544 } 00:16:22.544 }, 00:16:22.544 "base_bdevs_list": [ 00:16:22.544 { 00:16:22.544 "name": "spare", 00:16:22.544 "uuid": "551d1993-9f74-5227-892a-87d1c5403f42", 00:16:22.544 "is_configured": true, 00:16:22.544 "data_offset": 0, 00:16:22.544 "data_size": 65536 00:16:22.544 }, 00:16:22.544 { 00:16:22.544 "name": null, 00:16:22.544 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.544 "is_configured": false, 00:16:22.544 "data_offset": 0, 00:16:22.544 
"data_size": 65536 00:16:22.544 }, 00:16:22.544 { 00:16:22.544 "name": "BaseBdev3", 00:16:22.544 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:22.544 "is_configured": true, 00:16:22.545 "data_offset": 0, 00:16:22.545 "data_size": 65536 00:16:22.545 }, 00:16:22.545 { 00:16:22.545 "name": "BaseBdev4", 00:16:22.545 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:22.545 "is_configured": true, 00:16:22.545 "data_offset": 0, 00:16:22.545 "data_size": 65536 00:16:22.545 } 00:16:22.545 ] 00:16:22.545 }' 00:16:22.545 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@706 -- # local timeout=526 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.804 14:15:59 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:22.804 "name": "raid_bdev1", 00:16:22.804 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:22.804 "strip_size_kb": 0, 00:16:22.804 "state": "online", 00:16:22.804 "raid_level": "raid1", 00:16:22.804 "superblock": false, 00:16:22.804 "num_base_bdevs": 4, 00:16:22.804 "num_base_bdevs_discovered": 3, 00:16:22.804 "num_base_bdevs_operational": 3, 00:16:22.804 "process": { 00:16:22.804 "type": "rebuild", 00:16:22.804 "target": "spare", 00:16:22.804 "progress": { 00:16:22.804 "blocks": 14336, 00:16:22.804 "percent": 21 00:16:22.804 } 00:16:22.804 }, 00:16:22.804 "base_bdevs_list": [ 00:16:22.804 { 00:16:22.804 "name": "spare", 00:16:22.804 "uuid": "551d1993-9f74-5227-892a-87d1c5403f42", 00:16:22.804 "is_configured": true, 00:16:22.804 "data_offset": 0, 00:16:22.804 "data_size": 65536 00:16:22.804 }, 00:16:22.804 { 00:16:22.804 "name": null, 00:16:22.804 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:22.804 "is_configured": false, 00:16:22.804 "data_offset": 0, 00:16:22.804 "data_size": 65536 00:16:22.804 }, 00:16:22.804 { 00:16:22.804 "name": "BaseBdev3", 00:16:22.804 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:22.804 "is_configured": true, 00:16:22.804 "data_offset": 0, 00:16:22.804 "data_size": 65536 00:16:22.804 }, 00:16:22.804 { 00:16:22.804 "name": "BaseBdev4", 00:16:22.804 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:22.804 "is_configured": true, 00:16:22.804 "data_offset": 0, 00:16:22.804 "data_size": 65536 00:16:22.804 } 00:16:22.804 ] 00:16:22.804 }' 00:16:22.804 14:15:59 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 
-- # jq -r '.process.type // "none"' 00:16:22.804 14:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:22.804 14:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:22.804 14:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:22.804 14:16:00 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:23.372 [2024-11-27 14:16:00.352302] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 22528 offset_begin: 18432 offset_end: 24576 00:16:23.631 [2024-11-27 14:16:00.792334] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 28672 offset_begin: 24576 offset_end: 30720 00:16:23.891 102.80 IOPS, 308.40 MiB/s [2024-11-27T14:16:01.169Z] 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:23.891 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:23.891 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:23.891 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:23.891 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:23.891 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:23.891 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:23.891 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:23.891 14:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.891 14:16:01 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:23.891 14:16:01 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.891 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:23.891 "name": "raid_bdev1", 00:16:23.891 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:23.891 "strip_size_kb": 0, 00:16:23.891 "state": "online", 00:16:23.891 "raid_level": "raid1", 00:16:23.891 "superblock": false, 00:16:23.891 "num_base_bdevs": 4, 00:16:23.891 "num_base_bdevs_discovered": 3, 00:16:23.891 "num_base_bdevs_operational": 3, 00:16:23.891 "process": { 00:16:23.891 "type": "rebuild", 00:16:23.891 "target": "spare", 00:16:23.891 "progress": { 00:16:23.891 "blocks": 30720, 00:16:23.891 "percent": 46 00:16:23.891 } 00:16:23.891 }, 00:16:23.891 "base_bdevs_list": [ 00:16:23.891 { 00:16:23.891 "name": "spare", 00:16:23.891 "uuid": "551d1993-9f74-5227-892a-87d1c5403f42", 00:16:23.891 "is_configured": true, 00:16:23.891 "data_offset": 0, 00:16:23.891 "data_size": 65536 00:16:23.891 }, 00:16:23.891 { 00:16:23.891 "name": null, 00:16:23.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:23.891 "is_configured": false, 00:16:23.891 "data_offset": 0, 00:16:23.891 "data_size": 65536 00:16:23.891 }, 00:16:23.891 { 00:16:23.891 "name": "BaseBdev3", 00:16:23.891 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:23.891 "is_configured": true, 00:16:23.891 "data_offset": 0, 00:16:23.891 "data_size": 65536 00:16:23.891 }, 00:16:23.891 { 00:16:23.891 "name": "BaseBdev4", 00:16:23.891 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:23.891 "is_configured": true, 00:16:23.891 "data_offset": 0, 00:16:23.891 "data_size": 65536 00:16:23.891 } 00:16:23.891 ] 00:16:23.891 }' 00:16:23.891 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:24.151 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:24.151 14:16:01 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:24.151 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:24.151 14:16:01 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@711 -- # sleep 1 00:16:24.151 [2024-11-27 14:16:01.262535] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864 00:16:24.783 92.83 IOPS, 278.50 MiB/s [2024-11-27T14:16:02.061Z] [2024-11-27 14:16:01.998930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 47104 offset_begin: 43008 offset_end: 49152 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:25.040 "name": "raid_bdev1", 
00:16:25.040 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:25.040 "strip_size_kb": 0, 00:16:25.040 "state": "online", 00:16:25.040 "raid_level": "raid1", 00:16:25.040 "superblock": false, 00:16:25.040 "num_base_bdevs": 4, 00:16:25.040 "num_base_bdevs_discovered": 3, 00:16:25.040 "num_base_bdevs_operational": 3, 00:16:25.040 "process": { 00:16:25.040 "type": "rebuild", 00:16:25.040 "target": "spare", 00:16:25.040 "progress": { 00:16:25.040 "blocks": 49152, 00:16:25.040 "percent": 75 00:16:25.040 } 00:16:25.040 }, 00:16:25.040 "base_bdevs_list": [ 00:16:25.040 { 00:16:25.040 "name": "spare", 00:16:25.040 "uuid": "551d1993-9f74-5227-892a-87d1c5403f42", 00:16:25.040 "is_configured": true, 00:16:25.040 "data_offset": 0, 00:16:25.040 "data_size": 65536 00:16:25.040 }, 00:16:25.040 { 00:16:25.040 "name": null, 00:16:25.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:25.040 "is_configured": false, 00:16:25.040 "data_offset": 0, 00:16:25.040 "data_size": 65536 00:16:25.040 }, 00:16:25.040 { 00:16:25.040 "name": "BaseBdev3", 00:16:25.040 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:25.040 "is_configured": true, 00:16:25.040 "data_offset": 0, 00:16:25.040 "data_size": 65536 00:16:25.040 }, 00:16:25.040 { 00:16:25.040 "name": "BaseBdev4", 00:16:25.040 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:25.040 "is_configured": true, 00:16:25.040 "data_offset": 0, 00:16:25.040 "data_size": 65536 00:16:25.040 } 00:16:25.040 ] 00:16:25.040 }' 00:16:25.040 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:25.299 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:25.299 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:25.299 14:16:02 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:25.299 14:16:02 bdev_raid.raid_rebuild_test_io -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:16:25.299 [2024-11-27 14:16:02.456540] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 53248 offset_begin: 49152 offset_end: 55296 00:16:25.558 [2024-11-27 14:16:02.799018] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 59392 offset_begin: 55296 offset_end: 61440 00:16:26.126 84.57 IOPS, 253.71 MiB/s [2024-11-27T14:16:03.404Z] [2024-11-27 14:16:03.143701] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:16:26.126 [2024-11-27 14:16:03.251983] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:16:26.126 [2024-11-27 14:16:03.256648] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.385 "name": "raid_bdev1", 00:16:26.385 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:26.385 "strip_size_kb": 0, 00:16:26.385 "state": "online", 00:16:26.385 "raid_level": "raid1", 00:16:26.385 "superblock": false, 00:16:26.385 "num_base_bdevs": 4, 00:16:26.385 "num_base_bdevs_discovered": 3, 00:16:26.385 "num_base_bdevs_operational": 3, 00:16:26.385 "base_bdevs_list": [ 00:16:26.385 { 00:16:26.385 "name": "spare", 00:16:26.385 "uuid": "551d1993-9f74-5227-892a-87d1c5403f42", 00:16:26.385 "is_configured": true, 00:16:26.385 "data_offset": 0, 00:16:26.385 "data_size": 65536 00:16:26.385 }, 00:16:26.385 { 00:16:26.385 "name": null, 00:16:26.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.385 "is_configured": false, 00:16:26.385 "data_offset": 0, 00:16:26.385 "data_size": 65536 00:16:26.385 }, 00:16:26.385 { 00:16:26.385 "name": "BaseBdev3", 00:16:26.385 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:26.385 "is_configured": true, 00:16:26.385 "data_offset": 0, 00:16:26.385 "data_size": 65536 00:16:26.385 }, 00:16:26.385 { 00:16:26.385 "name": "BaseBdev4", 00:16:26.385 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:26.385 "is_configured": true, 00:16:26.385 "data_offset": 0, 00:16:26.385 "data_size": 65536 00:16:26.385 } 00:16:26.385 ] 00:16:26.385 }' 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@709 -- # break 00:16:26.385 14:16:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:26.385 "name": "raid_bdev1", 00:16:26.385 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:26.385 "strip_size_kb": 0, 00:16:26.385 "state": "online", 00:16:26.385 "raid_level": "raid1", 00:16:26.385 "superblock": false, 00:16:26.385 "num_base_bdevs": 4, 00:16:26.385 "num_base_bdevs_discovered": 3, 00:16:26.385 "num_base_bdevs_operational": 3, 00:16:26.385 "base_bdevs_list": [ 00:16:26.385 { 00:16:26.385 "name": "spare", 00:16:26.385 "uuid": "551d1993-9f74-5227-892a-87d1c5403f42", 00:16:26.385 "is_configured": true, 00:16:26.385 "data_offset": 0, 00:16:26.385 "data_size": 65536 00:16:26.385 }, 00:16:26.385 { 00:16:26.385 "name": null, 00:16:26.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.385 "is_configured": false, 00:16:26.385 "data_offset": 0, 00:16:26.385 "data_size": 65536 
00:16:26.385 }, 00:16:26.385 { 00:16:26.385 "name": "BaseBdev3", 00:16:26.385 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:26.385 "is_configured": true, 00:16:26.385 "data_offset": 0, 00:16:26.385 "data_size": 65536 00:16:26.385 }, 00:16:26.385 { 00:16:26.385 "name": "BaseBdev4", 00:16:26.385 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:26.385 "is_configured": true, 00:16:26.385 "data_offset": 0, 00:16:26.385 "data_size": 65536 00:16:26.385 } 00:16:26.385 ] 00:16:26.385 }' 00:16:26.385 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:26.644 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:26.644 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:26.644 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:26.644 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:26.644 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:26.645 14:16:03 
bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:26.645 "name": "raid_bdev1", 00:16:26.645 "uuid": "02c8f69f-01c6-4a1f-967a-34c9ad7f7e3c", 00:16:26.645 "strip_size_kb": 0, 00:16:26.645 "state": "online", 00:16:26.645 "raid_level": "raid1", 00:16:26.645 "superblock": false, 00:16:26.645 "num_base_bdevs": 4, 00:16:26.645 "num_base_bdevs_discovered": 3, 00:16:26.645 "num_base_bdevs_operational": 3, 00:16:26.645 "base_bdevs_list": [ 00:16:26.645 { 00:16:26.645 "name": "spare", 00:16:26.645 "uuid": "551d1993-9f74-5227-892a-87d1c5403f42", 00:16:26.645 "is_configured": true, 00:16:26.645 "data_offset": 0, 00:16:26.645 "data_size": 65536 00:16:26.645 }, 00:16:26.645 { 00:16:26.645 "name": null, 00:16:26.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.645 "is_configured": false, 00:16:26.645 "data_offset": 0, 00:16:26.645 "data_size": 65536 00:16:26.645 }, 00:16:26.645 { 00:16:26.645 "name": "BaseBdev3", 00:16:26.645 "uuid": "51a961e6-9248-522e-8493-1ec3393f14b1", 00:16:26.645 "is_configured": true, 00:16:26.645 "data_offset": 0, 00:16:26.645 "data_size": 65536 00:16:26.645 }, 00:16:26.645 { 00:16:26.645 "name": "BaseBdev4", 00:16:26.645 "uuid": "12a8d64b-fb7d-5bc0-84db-58246d62f197", 00:16:26.645 "is_configured": true, 00:16:26.645 "data_offset": 0, 00:16:26.645 "data_size": 65536 00:16:26.645 } 
00:16:26.645 ] 00:16:26.645 }' 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:26.645 14:16:03 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.214 78.00 IOPS, 234.00 MiB/s [2024-11-27T14:16:04.492Z] 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.214 [2024-11-27 14:16:04.273049] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.214 [2024-11-27 14:16:04.273086] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.214 00:16:27.214 Latency(us) 00:16:27.214 [2024-11-27T14:16:04.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:27.214 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:27.214 raid_bdev1 : 8.58 74.70 224.09 0.00 0.00 19073.16 296.03 123922.62 00:16:27.214 [2024-11-27T14:16:04.492Z] =================================================================================================================== 00:16:27.214 [2024-11-27T14:16:04.492Z] Total : 74.70 224.09 0.00 0.00 19073.16 296.03 123922.62 00:16:27.214 [2024-11-27 14:16:04.386498] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.214 [2024-11-27 14:16:04.386626] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:27.214 { 00:16:27.214 "results": [ 00:16:27.214 { 00:16:27.214 "job": "raid_bdev1", 00:16:27.214 "core_mask": "0x1", 00:16:27.214 "workload": "randrw", 00:16:27.214 "percentage": 50, 00:16:27.214 "status": "finished", 00:16:27.214 "queue_depth": 2, 00:16:27.214 "io_size": 3145728, 00:16:27.214 "runtime": 8.581524, 00:16:27.214 
"iops": 74.69535714169184, 00:16:27.214 "mibps": 224.08607142507554, 00:16:27.214 "io_failed": 0, 00:16:27.214 "io_timeout": 0, 00:16:27.214 "avg_latency_us": 19073.163999432705, 00:16:27.214 "min_latency_us": 296.0290909090909, 00:16:27.214 "max_latency_us": 123922.61818181818 00:16:27.214 } 00:16:27.214 ], 00:16:27.214 "core_count": 1 00:16:27.214 } 00:16:27.214 [2024-11-27 14:16:04.387022] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.214 [2024-11-27 14:16:04.387049] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:27.214 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:27.783 /dev/nbd0 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:27.783 1+0 records in 
00:16:27.783 1+0 records out 00:16:27.783 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416882 s, 9.8 MB/s 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 
00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:27.783 14:16:04 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:28.043 /dev/nbd1 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.043 1+0 records in 00:16:28.043 1+0 records out 00:16:28.043 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000580338 s, 7.1 MB/s 00:16:28.043 14:16:05 
bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:28.043 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:28.302 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:28.302 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:28.302 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:28.302 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:28.302 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:28.302 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:28.302 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # 
local nbd_name=nbd1 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@12 -- # local i 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:28.561 14:16:05 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:28.820 /dev/nbd1 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:28.820 
14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@873 -- # local i 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@877 -- # break 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:28.820 1+0 records in 00:16:28.820 1+0 records out 00:16:28.820 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00042324 s, 9.7 MB/s 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@890 -- # size=4096 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@893 -- # return 0 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io 
-- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:28.820 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@731 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:16:29.079 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:29.079 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:29.079 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:29.079 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:29.079 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:29.079 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:29.079 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@49 -- # local 
rpc_server=/var/tmp/spdk.sock 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@51 -- # local i 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:29.338 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@41 -- # break 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@784 -- # killprocess 78974 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@954 -- # '[' -z 78974 ']' 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@958 -- # kill -0 78974 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@959 -- # uname 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78974 00:16:29.598 killing process with pid 78974 00:16:29.598 Received shutdown signal, test time was about 10.993926 seconds 00:16:29.598 00:16:29.598 Latency(us) 00:16:29.598 [2024-11-27T14:16:06.876Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:29.598 [2024-11-27T14:16:06.876Z] =================================================================================================================== 00:16:29.598 [2024-11-27T14:16:06.876Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78974' 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@973 -- # kill 78974 00:16:29.598 14:16:06 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@978 -- # wait 78974 00:16:29.598 [2024-11-27 14:16:06.777411] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:30.167 [2024-11-27 14:16:07.173589] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:31.105 14:16:08 bdev_raid.raid_rebuild_test_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:31.105 00:16:31.105 real 0m14.762s 00:16:31.105 user 0m19.542s 00:16:31.105 sys 0m1.834s 00:16:31.105 14:16:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:31.105 14:16:08 bdev_raid.raid_rebuild_test_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.105 ************************************ 00:16:31.105 END TEST raid_rebuild_test_io 00:16:31.105 ************************************ 
00:16:31.105 14:16:08 bdev_raid -- bdev/bdev_raid.sh@981 -- # run_test raid_rebuild_test_sb_io raid_rebuild_test raid1 4 true true true 00:16:31.105 14:16:08 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:16:31.105 14:16:08 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:31.105 14:16:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:31.364 ************************************ 00:16:31.364 START TEST raid_rebuild_test_sb_io 00:16:31.364 ************************************ 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 4 true true true 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@572 -- # local background_io=true 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@573 -- # local verify=true 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@576 -- # local strip_size 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@577 -- # local create_arg 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@579 -- # local data_offset 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@597 -- # raid_pid=79394 00:16:31.364 14:16:08 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@598 -- # waitforlisten 79394 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@835 -- # '[' -z 79394 ']' 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:31.364 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:31.364 14:16:08 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:31.364 [2024-11-27 14:16:08.506865] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:16:31.364 [2024-11-27 14:16:08.507254] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79394 ] 
00:16:31.364 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:31.364 Zero copy mechanism will not be used. 00:16:31.623 [2024-11-27 14:16:08.697579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:31.623 [2024-11-27 14:16:08.834285] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:31.882 [2024-11-27 14:16:09.051401] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:31.882 [2024-11-27 14:16:09.051480] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@868 -- # return 0 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.448 BaseBdev1_malloc 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.448 [2024-11-27 14:16:09.574103] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:32.448 [2024-11-27 14:16:09.574182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.448 [2024-11-27 14:16:09.574214] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007280 00:16:32.448 [2024-11-27 14:16:09.574232] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.448 [2024-11-27 14:16:09.577397] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.448 [2024-11-27 14:16:09.577450] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:32.448 BaseBdev1 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.448 BaseBdev2_malloc 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.448 [2024-11-27 14:16:09.631408] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:16:32.448 [2024-11-27 14:16:09.631616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.448 [2024-11-27 14:16:09.631660] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:16:32.448 [2024-11-27 14:16:09.631679] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:16:32.448 [2024-11-27 14:16:09.634459] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.448 [2024-11-27 14:16:09.634508] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:32.448 BaseBdev2 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.448 BaseBdev3_malloc 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.448 [2024-11-27 14:16:09.698981] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:16:32.448 [2024-11-27 14:16:09.699053] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.448 [2024-11-27 14:16:09.699085] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:16:32.448 [2024-11-27 14:16:09.699112] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.448 [2024-11-27 14:16:09.701926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.448 [2024-11-27 14:16:09.701994] 
vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:32.448 BaseBdev3 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.448 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.708 BaseBdev4_malloc 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.708 [2024-11-27 14:16:09.755761] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:16:32.708 [2024-11-27 14:16:09.755997] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.708 [2024-11-27 14:16:09.756038] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:16:32.708 [2024-11-27 14:16:09.756063] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.708 [2024-11-27 14:16:09.758911] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.708 [2024-11-27 14:16:09.758975] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:32.708 BaseBdev4 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.708 spare_malloc 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.708 spare_delay 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.708 [2024-11-27 14:16:09.816472] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:32.708 [2024-11-27 14:16:09.816540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.708 [2024-11-27 14:16:09.816568] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:16:32.708 [2024-11-27 14:16:09.816586] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.708 [2024-11-27 14:16:09.819517] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 
00:16:32.708 [2024-11-27 14:16:09.819684] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:32.708 spare 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.708 [2024-11-27 14:16:09.828578] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:32.708 [2024-11-27 14:16:09.831129] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:32.708 [2024-11-27 14:16:09.831243] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:32.708 [2024-11-27 14:16:09.831335] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:32.708 [2024-11-27 14:16:09.831560] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:16:32.708 [2024-11-27 14:16:09.831583] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:32.708 [2024-11-27 14:16:09.831959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:16:32.708 [2024-11-27 14:16:09.832200] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:16:32.708 [2024-11-27 14:16:09.832217] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:16:32.708 [2024-11-27 14:16:09.832425] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:32.708 "name": "raid_bdev1", 00:16:32.708 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:32.708 "strip_size_kb": 0, 00:16:32.708 "state": 
"online", 00:16:32.708 "raid_level": "raid1", 00:16:32.708 "superblock": true, 00:16:32.708 "num_base_bdevs": 4, 00:16:32.708 "num_base_bdevs_discovered": 4, 00:16:32.708 "num_base_bdevs_operational": 4, 00:16:32.708 "base_bdevs_list": [ 00:16:32.708 { 00:16:32.708 "name": "BaseBdev1", 00:16:32.708 "uuid": "b7a7a8e4-298d-5f2d-b063-71878fb40343", 00:16:32.708 "is_configured": true, 00:16:32.708 "data_offset": 2048, 00:16:32.708 "data_size": 63488 00:16:32.708 }, 00:16:32.708 { 00:16:32.708 "name": "BaseBdev2", 00:16:32.708 "uuid": "70c87ceb-dcb6-5bc6-ae58-8beeef5f7df2", 00:16:32.708 "is_configured": true, 00:16:32.708 "data_offset": 2048, 00:16:32.708 "data_size": 63488 00:16:32.708 }, 00:16:32.708 { 00:16:32.708 "name": "BaseBdev3", 00:16:32.708 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:32.708 "is_configured": true, 00:16:32.708 "data_offset": 2048, 00:16:32.708 "data_size": 63488 00:16:32.708 }, 00:16:32.708 { 00:16:32.708 "name": "BaseBdev4", 00:16:32.708 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:32.708 "is_configured": true, 00:16:32.708 "data_offset": 2048, 00:16:32.708 "data_size": 63488 00:16:32.708 } 00:16:32.708 ] 00:16:32.708 }' 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:32.708 14:16:09 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.277 [2024-11-27 14:16:10.377249] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=63488 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@621 -- # '[' true = true ']' 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@623 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.277 [2024-11-27 14:16:10.480738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:33.277 "name": "raid_bdev1", 00:16:33.277 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:33.277 "strip_size_kb": 0, 00:16:33.277 "state": "online", 00:16:33.277 "raid_level": "raid1", 00:16:33.277 "superblock": true, 00:16:33.277 "num_base_bdevs": 4, 00:16:33.277 "num_base_bdevs_discovered": 3, 00:16:33.277 "num_base_bdevs_operational": 3, 00:16:33.277 "base_bdevs_list": [ 00:16:33.277 { 00:16:33.277 "name": null, 00:16:33.277 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:33.277 "is_configured": false, 00:16:33.277 
"data_offset": 0, 00:16:33.277 "data_size": 63488 00:16:33.277 }, 00:16:33.277 { 00:16:33.277 "name": "BaseBdev2", 00:16:33.277 "uuid": "70c87ceb-dcb6-5bc6-ae58-8beeef5f7df2", 00:16:33.277 "is_configured": true, 00:16:33.277 "data_offset": 2048, 00:16:33.277 "data_size": 63488 00:16:33.277 }, 00:16:33.277 { 00:16:33.277 "name": "BaseBdev3", 00:16:33.277 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:33.277 "is_configured": true, 00:16:33.277 "data_offset": 2048, 00:16:33.277 "data_size": 63488 00:16:33.277 }, 00:16:33.277 { 00:16:33.277 "name": "BaseBdev4", 00:16:33.277 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:33.277 "is_configured": true, 00:16:33.277 "data_offset": 2048, 00:16:33.277 "data_size": 63488 00:16:33.277 } 00:16:33.277 ] 00:16:33.277 }' 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:33.277 14:16:10 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:33.536 [2024-11-27 14:16:10.609134] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:16:33.536 I/O size of 3145728 is greater than zero copy threshold (65536). 00:16:33.536 Zero copy mechanism will not be used. 00:16:33.536 Running I/O for 60 seconds... 
00:16:33.796 14:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:33.796 14:16:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:33.796 14:16:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:33.796 [2024-11-27 14:16:11.026479] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:34.055 14:16:11 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:34.055 14:16:11 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@647 -- # sleep 1
00:16:34.055 [2024-11-27 14:16:11.120713] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000062f0
00:16:34.055 [2024-11-27 14:16:11.123655] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:34.055 [2024-11-27 14:16:11.255760] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:16:34.055 [2024-11-27 14:16:11.257427] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:16:34.318 [2024-11-27 14:16:11.490180] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:16:34.318 [2024-11-27 14:16:11.490816] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:16:34.577 133.00 IOPS, 399.00 MiB/s [2024-11-27T14:16:11.855Z] [2024-11-27 14:16:11.852786] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:16:34.836 [2024-11-27 14:16:11.854568] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:16:34.836 [2024-11-27 14:16:12.085478] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:16:34.836 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:34.836 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:34.836 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:34.836 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:34.836 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:34.836 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:34.836 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:34.836 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:34.836 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:35.095 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.095 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:35.095 "name": "raid_bdev1",
00:16:35.095 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d",
00:16:35.095 "strip_size_kb": 0,
00:16:35.095 "state": "online",
00:16:35.095 "raid_level": "raid1",
00:16:35.095 "superblock": true,
00:16:35.095 "num_base_bdevs": 4,
00:16:35.095 "num_base_bdevs_discovered": 4,
00:16:35.095 "num_base_bdevs_operational": 4,
00:16:35.095 "process": {
00:16:35.095 "type": "rebuild",
00:16:35.095 "target": "spare",
00:16:35.095 "progress": {
00:16:35.095 "blocks": 10240,
00:16:35.095 "percent": 16
00:16:35.095 }
00:16:35.095 },
00:16:35.095 "base_bdevs_list": [
00:16:35.095 {
00:16:35.095 "name": "spare",
00:16:35.095 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a",
00:16:35.095 "is_configured": true,
00:16:35.095 "data_offset": 2048,
00:16:35.095 "data_size": 63488
00:16:35.095 },
00:16:35.095 {
00:16:35.095 "name": "BaseBdev2",
00:16:35.095 "uuid": "70c87ceb-dcb6-5bc6-ae58-8beeef5f7df2",
00:16:35.095 "is_configured": true,
00:16:35.095 "data_offset": 2048,
00:16:35.095 "data_size": 63488
00:16:35.095 },
00:16:35.095 {
00:16:35.095 "name": "BaseBdev3",
00:16:35.095 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316",
00:16:35.095 "is_configured": true,
00:16:35.095 "data_offset": 2048,
00:16:35.095 "data_size": 63488
00:16:35.095 },
00:16:35.095 {
00:16:35.095 "name": "BaseBdev4",
00:16:35.095 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66",
00:16:35.095 "is_configured": true,
00:16:35.095 "data_offset": 2048,
00:16:35.095 "data_size": 63488
00:16:35.095 }
00:16:35.095 ]
00:16:35.095 }'
00:16:35.095 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:35.095 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:35.095 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:35.095 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:35.095 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare
00:16:35.095 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.095 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:35.095 [2024-11-27 14:16:12.276646] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:35.354 [2024-11-27 14:16:12.432050] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device
00:16:35.354 [2024-11-27 14:16:12.445578] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:35.354 [2024-11-27 14:16:12.445653] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare
00:16:35.354 [2024-11-27 14:16:12.445671] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device
00:16:35.354 [2024-11-27 14:16:12.475828] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x60d000006220
00:16:35.354 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.354 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:16:35.355 "name": "raid_bdev1",
00:16:35.355 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d",
00:16:35.355 "strip_size_kb": 0,
00:16:35.355 "state": "online",
00:16:35.355 "raid_level": "raid1",
00:16:35.355 "superblock": true,
00:16:35.355 "num_base_bdevs": 4,
00:16:35.355 "num_base_bdevs_discovered": 3,
00:16:35.355 "num_base_bdevs_operational": 3,
00:16:35.355 "base_bdevs_list": [
00:16:35.355 {
00:16:35.355 "name": null,
00:16:35.355 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:35.355 "is_configured": false,
00:16:35.355 "data_offset": 0,
00:16:35.355 "data_size": 63488
00:16:35.355 },
00:16:35.355 {
00:16:35.355 "name": "BaseBdev2",
00:16:35.355 "uuid": "70c87ceb-dcb6-5bc6-ae58-8beeef5f7df2",
00:16:35.355 "is_configured": true,
00:16:35.355 "data_offset": 2048,
00:16:35.355 "data_size": 63488
00:16:35.355 },
00:16:35.355 {
00:16:35.355 "name": "BaseBdev3",
00:16:35.355 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316",
00:16:35.355 "is_configured": true,
00:16:35.355 "data_offset": 2048,
00:16:35.355 "data_size": 63488
00:16:35.355 },
00:16:35.355 {
00:16:35.355 "name": "BaseBdev4",
00:16:35.355 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66",
00:16:35.355 "is_configured": true,
00:16:35.355 "data_offset": 2048,
00:16:35.355 "data_size": 63488
00:16:35.355 }
00:16:35.355 ]
00:16:35.355 }'
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:16:35.355 14:16:12 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:35.922 99.00 IOPS, 297.00 MiB/s [2024-11-27T14:16:13.200Z] 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:35.923 "name": "raid_bdev1",
00:16:35.923 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d",
00:16:35.923 "strip_size_kb": 0,
00:16:35.923 "state": "online",
00:16:35.923 "raid_level": "raid1",
00:16:35.923 "superblock": true,
00:16:35.923 "num_base_bdevs": 4,
00:16:35.923 "num_base_bdevs_discovered": 3,
00:16:35.923 "num_base_bdevs_operational": 3,
00:16:35.923 "base_bdevs_list": [
00:16:35.923 {
00:16:35.923 "name": null,
00:16:35.923 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:35.923 "is_configured": false,
00:16:35.923 "data_offset": 0,
00:16:35.923 "data_size": 63488
00:16:35.923 },
00:16:35.923 {
00:16:35.923 "name": "BaseBdev2",
00:16:35.923 "uuid": "70c87ceb-dcb6-5bc6-ae58-8beeef5f7df2",
00:16:35.923 "is_configured": true,
00:16:35.923 "data_offset": 2048,
00:16:35.923 "data_size": 63488
00:16:35.923 },
00:16:35.923 {
00:16:35.923 "name": "BaseBdev3",
00:16:35.923 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316",
00:16:35.923 "is_configured": true,
00:16:35.923 "data_offset": 2048,
00:16:35.923 "data_size": 63488
00:16:35.923 },
00:16:35.923 {
00:16:35.923 "name": "BaseBdev4",
00:16:35.923 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66",
00:16:35.923 "is_configured": true,
00:16:35.923 "data_offset": 2048,
00:16:35.923 "data_size": 63488
00:16:35.923 }
00:16:35.923 ]
00:16:35.923 }'
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]]
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:35.923 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:35.923 [2024-11-27 14:16:13.171448] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed
00:16:36.181 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:36.182 14:16:13 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@663 -- # sleep 1
00:16:36.182 [2024-11-27 14:16:13.234471] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0
00:16:36.182 [2024-11-27 14:16:13.237151] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1
00:16:36.182 [2024-11-27 14:16:13.338655] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:16:36.182 [2024-11-27 14:16:13.339448] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 2048 offset_begin: 0 offset_end: 6144
00:16:36.440 [2024-11-27 14:16:13.469108] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:16:36.440 [2024-11-27 14:16:13.469488] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 4096 offset_begin: 0 offset_end: 6144
00:16:36.699 116.33 IOPS, 349.00 MiB/s [2024-11-27T14:16:13.977Z] [2024-11-27 14:16:13.723405] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:16:36.699 [2024-11-27 14:16:13.724930] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 8192 offset_begin: 6144 offset_end: 12288
00:16:36.699 [2024-11-27 14:16:13.940127] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 10240 offset_begin: 6144 offset_end: 12288
00:16:36.988 [2024-11-27 14:16:14.213672] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 14336 offset_begin: 12288 offset_end: 18432
00:16:36.988 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:36.988 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:36.988 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:36.988 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:36.988 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:36.988 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:36.988 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:36.988 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:36.988 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:36.988 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:37.247 "name": "raid_bdev1",
00:16:37.247 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d",
00:16:37.247 "strip_size_kb": 0,
00:16:37.247 "state": "online",
00:16:37.247 "raid_level": "raid1",
00:16:37.247 "superblock": true,
00:16:37.247 "num_base_bdevs": 4,
00:16:37.247 "num_base_bdevs_discovered": 4,
00:16:37.247 "num_base_bdevs_operational": 4,
00:16:37.247 "process": {
00:16:37.247 "type": "rebuild",
00:16:37.247 "target": "spare",
00:16:37.247 "progress": {
00:16:37.247 "blocks": 14336,
00:16:37.247 "percent": 22
00:16:37.247 }
00:16:37.247 },
00:16:37.247 "base_bdevs_list": [
00:16:37.247 {
00:16:37.247 "name": "spare",
00:16:37.247 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a",
00:16:37.247 "is_configured": true,
00:16:37.247 "data_offset": 2048,
00:16:37.247 "data_size": 63488
00:16:37.247 },
00:16:37.247 {
00:16:37.247 "name": "BaseBdev2",
00:16:37.247 "uuid": "70c87ceb-dcb6-5bc6-ae58-8beeef5f7df2",
00:16:37.247 "is_configured": true,
00:16:37.247 "data_offset": 2048,
00:16:37.247 "data_size": 63488
00:16:37.247 },
00:16:37.247 {
00:16:37.247 "name": "BaseBdev3",
00:16:37.247 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316",
00:16:37.247 "is_configured": true,
00:16:37.247 "data_offset": 2048,
00:16:37.247 "data_size": 63488
00:16:37.247 },
00:16:37.247 {
00:16:37.247 "name": "BaseBdev4",
00:16:37.247 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66",
00:16:37.247 "is_configured": true,
00:16:37.247 "data_offset": 2048,
00:16:37.247 "data_size": 63488
00:16:37.247 }
00:16:37.247 ]
00:16:37.247 }'
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:37.247 [2024-11-27 14:16:14.345094] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 16384 offset_begin: 12288 offset_end: 18432
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' true = true ']'
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@666 -- # '[' = false ']'
00:16:37.247 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']'
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@693 -- # '[' 4 -gt 2 ']'
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@695 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.247 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:37.247 [2024-11-27 14:16:14.410759] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2
00:16:37.507 105.50 IOPS, 316.50 MiB/s [2024-11-27T14:16:14.785Z] [2024-11-27 14:16:14.701969] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d000006220
00:16:37.507 [2024-11-27 14:16:14.702045] bdev_raid.c:1974:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 1 raid_ch: 0x60d0000063c0
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@698 -- # base_bdevs[1]=
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@699 -- # (( num_base_bdevs_operational-- ))
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@702 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.507 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:37.507 "name": "raid_bdev1",
00:16:37.507 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d",
00:16:37.507 "strip_size_kb": 0,
00:16:37.507 "state": "online",
00:16:37.507 "raid_level": "raid1",
00:16:37.507 "superblock": true,
00:16:37.507 "num_base_bdevs": 4,
00:16:37.507 "num_base_bdevs_discovered": 3,
00:16:37.507 "num_base_bdevs_operational": 3,
00:16:37.507 "process": {
00:16:37.507 "type": "rebuild",
00:16:37.507 "target": "spare",
00:16:37.507 "progress": {
00:16:37.507 "blocks": 18432,
00:16:37.507 "percent": 29
00:16:37.507 }
00:16:37.507 },
00:16:37.507 "base_bdevs_list": [
00:16:37.507 {
00:16:37.507 "name": "spare",
00:16:37.507 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a",
00:16:37.507 "is_configured": true,
00:16:37.507 "data_offset": 2048,
00:16:37.507 "data_size": 63488
00:16:37.507 },
00:16:37.507 {
00:16:37.507 "name": null,
00:16:37.507 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:37.507 "is_configured": false,
00:16:37.507 "data_offset": 0,
00:16:37.507 "data_size": 63488
00:16:37.507 },
00:16:37.507 {
00:16:37.507 "name": "BaseBdev3",
00:16:37.507 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316",
00:16:37.507 "is_configured": true,
00:16:37.507 "data_offset": 2048,
00:16:37.507 "data_size": 63488
00:16:37.507 },
00:16:37.507 {
00:16:37.507 "name": "BaseBdev4",
00:16:37.507 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66",
00:16:37.507 "is_configured": true,
00:16:37.507 "data_offset": 2048,
00:16:37.507 "data_size": 63488
00:16:37.507 }
00:16:37.507 ]
00:16:37.507 }'
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:37.767 [2024-11-27 14:16:14.845739] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 20480 offset_begin: 18432 offset_end: 24576
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@706 -- # local timeout=541
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:37.767 "name": "raid_bdev1",
00:16:37.767 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d",
00:16:37.767 "strip_size_kb": 0,
00:16:37.767 "state": "online",
00:16:37.767 "raid_level": "raid1",
00:16:37.767 "superblock": true,
00:16:37.767 "num_base_bdevs": 4,
00:16:37.767 "num_base_bdevs_discovered": 3,
00:16:37.767 "num_base_bdevs_operational": 3,
00:16:37.767 "process": {
00:16:37.767 "type": "rebuild",
00:16:37.767 "target": "spare",
00:16:37.767 "progress": {
00:16:37.767 "blocks": 20480,
00:16:37.767 "percent": 32
00:16:37.767 }
00:16:37.767 },
00:16:37.767 "base_bdevs_list": [
00:16:37.767 {
00:16:37.767 "name": "spare",
00:16:37.767 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a",
00:16:37.767 "is_configured": true,
00:16:37.767 "data_offset": 2048,
00:16:37.767 "data_size": 63488
00:16:37.767 },
00:16:37.767 {
00:16:37.767 "name": null,
00:16:37.767 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:37.767 "is_configured": false,
00:16:37.767 "data_offset": 0,
00:16:37.767 "data_size": 63488
00:16:37.767 },
00:16:37.767 {
00:16:37.767 "name": "BaseBdev3",
00:16:37.767 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316",
00:16:37.767 "is_configured": true,
00:16:37.767 "data_offset": 2048,
00:16:37.767 "data_size": 63488
00:16:37.767 },
00:16:37.767 {
00:16:37.767 "name": "BaseBdev4",
00:16:37.767 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66",
00:16:37.767 "is_configured": true,
00:16:37.767 "data_offset": 2048,
00:16:37.767 "data_size": 63488
00:16:37.767 }
00:16:37.767 ]
00:16:37.767 }'
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:37.767 14:16:14 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:37.767 14:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:37.767 14:16:15 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:38.027 [2024-11-27 14:16:15.260474] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 26624 offset_begin: 24576 offset_end: 30720
00:16:38.594 94.40 IOPS, 283.20 MiB/s [2024-11-27T14:16:15.872Z] [2024-11-27 14:16:15.656900] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 32768 offset_begin: 30720 offset_end: 36864
00:16:38.853 [2024-11-27 14:16:15.909896] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 34816 offset_begin: 30720 offset_end: 36864
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:38.853 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:38.853 "name": "raid_bdev1",
00:16:38.853 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d",
00:16:38.853 "strip_size_kb": 0,
00:16:38.853 "state": "online",
00:16:38.853 "raid_level": "raid1",
00:16:38.853 "superblock": true,
00:16:38.853 "num_base_bdevs": 4,
00:16:38.853 "num_base_bdevs_discovered": 3,
00:16:38.853 "num_base_bdevs_operational": 3,
00:16:38.853 "process": {
00:16:38.853 "type": "rebuild",
00:16:38.853 "target": "spare",
00:16:38.853 "progress": {
00:16:38.853 "blocks": 36864,
00:16:38.853 "percent": 58
00:16:38.853 }
00:16:38.853 },
00:16:38.853 "base_bdevs_list": [
00:16:38.853 {
00:16:38.853 "name": "spare",
00:16:38.853 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a",
00:16:38.853 "is_configured": true,
00:16:38.853 "data_offset": 2048,
00:16:38.853 "data_size": 63488
00:16:38.853 },
00:16:38.853 {
00:16:38.853 "name": null,
00:16:38.853 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:38.853 "is_configured": false,
00:16:38.853 "data_offset": 0,
00:16:38.853 "data_size": 63488
00:16:38.853 },
00:16:38.853 {
00:16:38.853 "name": "BaseBdev3",
00:16:38.853 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316",
00:16:38.853 "is_configured": true,
00:16:38.853 "data_offset": 2048,
00:16:38.853 "data_size": 63488
00:16:38.853 },
00:16:38.853 {
00:16:38.853 "name": "BaseBdev4",
00:16:38.853 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66",
00:16:38.853 "is_configured": true,
00:16:38.853 "data_offset": 2048,
00:16:38.853 "data_size": 63488
00:16:38.853 }
00:16:38.853 ]
00:16:38.853 }'
00:16:39.137 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:39.137 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:39.137 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:39.137 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:39.137 14:16:16 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:39.137 [2024-11-27 14:16:16.258521] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 40960 offset_begin: 36864 offset_end: 43008
00:16:39.673 84.50 IOPS, 253.50 MiB/s [2024-11-27T14:16:16.951Z] [2024-11-27 14:16:16.940659] bdev_raid.c: 859:raid_bdev_submit_rw_request: *DEBUG*: split: process_offset: 51200 offset_begin: 49152 offset_end: 55296
00:16:39.933 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:39.933 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:39.933 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:39.933 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:39.933 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:39.933 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:39.933 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:39.933 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:39.933 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:39.933 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:40.193 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:40.193 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:40.193 "name": "raid_bdev1",
00:16:40.193 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d",
00:16:40.193 "strip_size_kb": 0,
00:16:40.193 "state": "online",
00:16:40.193 "raid_level": "raid1",
00:16:40.193 "superblock": true,
00:16:40.193 "num_base_bdevs": 4,
00:16:40.193 "num_base_bdevs_discovered": 3,
00:16:40.193 "num_base_bdevs_operational": 3,
00:16:40.193 "process": {
00:16:40.193 "type": "rebuild",
00:16:40.193 "target": "spare",
00:16:40.193 "progress": {
00:16:40.193 "blocks": 55296,
00:16:40.193 "percent": 87
00:16:40.193 }
00:16:40.193 },
00:16:40.193 "base_bdevs_list": [
00:16:40.193 {
00:16:40.193 "name": "spare",
00:16:40.193 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a",
00:16:40.193 "is_configured": true,
00:16:40.193 "data_offset": 2048,
00:16:40.193 "data_size": 63488
00:16:40.193 },
00:16:40.193 {
00:16:40.193 "name": null,
00:16:40.193 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:40.193 "is_configured": false,
00:16:40.193 "data_offset": 0,
00:16:40.193 "data_size": 63488
00:16:40.193 },
00:16:40.193 {
00:16:40.193 "name": "BaseBdev3",
00:16:40.193 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316",
00:16:40.193 "is_configured": true,
00:16:40.193 "data_offset": 2048,
00:16:40.193 "data_size": 63488
00:16:40.193 },
00:16:40.193 {
00:16:40.193 "name": "BaseBdev4",
00:16:40.193 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66",
00:16:40.193 "is_configured": true,
00:16:40.193 "data_offset": 2048,
00:16:40.193 "data_size": 63488
00:16:40.193 }
00:16:40.193 ]
00:16:40.193 }'
00:16:40.193 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:40.193 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]]
00:16:40.193 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:40.193 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]]
00:16:40.193 14:16:17 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@711 -- # sleep 1
00:16:40.453 77.14 IOPS, 231.43 MiB/s [2024-11-27T14:16:17.731Z] [2024-11-27 14:16:17.621663] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1
00:16:40.453 [2024-11-27 14:16:17.726214] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1
00:16:40.712 [2024-11-27 14:16:17.731281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout ))
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:41.279 "name": "raid_bdev1",
00:16:41.279 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d",
00:16:41.279 "strip_size_kb": 0,
00:16:41.279 "state": "online",
00:16:41.279 "raid_level": "raid1",
00:16:41.279 "superblock": true,
00:16:41.279 "num_base_bdevs": 4,
00:16:41.279 "num_base_bdevs_discovered": 3,
00:16:41.279 "num_base_bdevs_operational": 3,
00:16:41.279 "base_bdevs_list": [
00:16:41.279 {
00:16:41.279 "name": "spare",
00:16:41.279 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a",
00:16:41.279 "is_configured": true,
00:16:41.279 "data_offset": 2048,
00:16:41.279 "data_size": 63488
00:16:41.279 },
00:16:41.279 {
00:16:41.279 "name": null,
00:16:41.279 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:41.279 "is_configured": false,
00:16:41.279 "data_offset": 0,
00:16:41.279 "data_size": 63488
00:16:41.279 },
00:16:41.279 {
00:16:41.279 "name": "BaseBdev3",
00:16:41.279 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316",
00:16:41.279 "is_configured": true,
00:16:41.279 "data_offset": 2048,
00:16:41.279 "data_size": 63488
00:16:41.279 },
00:16:41.279 {
00:16:41.279 "name": "BaseBdev4",
00:16:41.279 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66",
00:16:41.279 "is_configured": true,
00:16:41.279 "data_offset": 2048,
00:16:41.279 "data_size": 63488
00:16:41.279 }
00:16:41.279 ]
00:16:41.279 }'
00:16:41.279 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]]
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]]
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@709 -- # break
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")'
00:16:41.280 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x
00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{
00:16:41.539 "name": "raid_bdev1",
00:16:41.539 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d",
00:16:41.539 "strip_size_kb": 0,
00:16:41.539 "state": "online",
00:16:41.539 "raid_level": "raid1",
00:16:41.539 "superblock": true,
00:16:41.539 "num_base_bdevs": 4,
00:16:41.539 "num_base_bdevs_discovered": 3,
00:16:41.539 "num_base_bdevs_operational": 3,
00:16:41.539 "base_bdevs_list": [
00:16:41.539 {
00:16:41.539 "name": "spare",
00:16:41.539 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a",
00:16:41.539 "is_configured": true,
00:16:41.539 "data_offset": 2048,
00:16:41.539 "data_size": 63488
00:16:41.539 },
00:16:41.539 {
00:16:41.539 "name": null,
00:16:41.539 "uuid": "00000000-0000-0000-0000-000000000000",
00:16:41.539 "is_configured": false,
00:16:41.539 "data_offset": 0,
00:16:41.539 "data_size": 63488
00:16:41.539 },
00:16:41.539 {
00:16:41.539 "name": "BaseBdev3",
00:16:41.539 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316",
00:16:41.539 "is_configured": true,
00:16:41.539 "data_offset": 2048,
00:16:41.539 "data_size": 63488
00:16:41.539 },
00:16:41.539 {
00:16:41.539 "name": "BaseBdev4",
00:16:41.539 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66",
00:16:41.539 "is_configured": true,
00:16:41.539 "data_offset": 2048,
00:16:41.539 "data_size": 63488
00:16:41.539 }
00:16:41.539 ]
00:16:41.539 }'
00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"'
00:16:41.539 72.00 IOPS, 216.00 MiB/s [2024-11-27T14:16:18.817Z] 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]]
00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"'
00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none
== \n\o\n\e ]] 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.539 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:41.539 "name": "raid_bdev1", 00:16:41.539 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:41.539 "strip_size_kb": 0, 00:16:41.539 "state": "online", 00:16:41.539 
"raid_level": "raid1", 00:16:41.539 "superblock": true, 00:16:41.539 "num_base_bdevs": 4, 00:16:41.540 "num_base_bdevs_discovered": 3, 00:16:41.540 "num_base_bdevs_operational": 3, 00:16:41.540 "base_bdevs_list": [ 00:16:41.540 { 00:16:41.540 "name": "spare", 00:16:41.540 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a", 00:16:41.540 "is_configured": true, 00:16:41.540 "data_offset": 2048, 00:16:41.540 "data_size": 63488 00:16:41.540 }, 00:16:41.540 { 00:16:41.540 "name": null, 00:16:41.540 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:41.540 "is_configured": false, 00:16:41.540 "data_offset": 0, 00:16:41.540 "data_size": 63488 00:16:41.540 }, 00:16:41.540 { 00:16:41.540 "name": "BaseBdev3", 00:16:41.540 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:41.540 "is_configured": true, 00:16:41.540 "data_offset": 2048, 00:16:41.540 "data_size": 63488 00:16:41.540 }, 00:16:41.540 { 00:16:41.540 "name": "BaseBdev4", 00:16:41.540 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:41.540 "is_configured": true, 00:16:41.540 "data_offset": 2048, 00:16:41.540 "data_size": 63488 00:16:41.540 } 00:16:41.540 ] 00:16:41.540 }' 00:16:41.540 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:41.540 14:16:18 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.108 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:16:42.108 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.108 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.108 [2024-11-27 14:16:19.253254] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:42.108 [2024-11-27 14:16:19.253461] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:42.108 00:16:42.108 Latency(us) 00:16:42.108 
[2024-11-27T14:16:19.386Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:16:42.108 Job: raid_bdev1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 2, IO size: 3145728) 00:16:42.108 raid_bdev1 : 8.67 68.86 206.59 0.00 0.00 20824.36 281.13 113436.86 00:16:42.108 [2024-11-27T14:16:19.386Z] =================================================================================================================== 00:16:42.109 [2024-11-27T14:16:19.387Z] Total : 68.86 206.59 0.00 0.00 20824.36 281.13 113436.86 00:16:42.109 [2024-11-27 14:16:19.300183] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:42.109 [2024-11-27 14:16:19.300501] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:42.109 { 00:16:42.109 "results": [ 00:16:42.109 { 00:16:42.109 "job": "raid_bdev1", 00:16:42.109 "core_mask": "0x1", 00:16:42.109 "workload": "randrw", 00:16:42.109 "percentage": 50, 00:16:42.109 "status": "finished", 00:16:42.109 "queue_depth": 2, 00:16:42.109 "io_size": 3145728, 00:16:42.109 "runtime": 8.669215, 00:16:42.109 "iops": 68.8643666122019, 00:16:42.109 "mibps": 206.59309983660572, 00:16:42.109 "io_failed": 0, 00:16:42.109 "io_timeout": 0, 00:16:42.109 "avg_latency_us": 20824.355279427444, 00:16:42.109 "min_latency_us": 281.13454545454545, 00:16:42.109 "max_latency_us": 113436.85818181818 00:16:42.109 } 00:16:42.109 ], 00:16:42.109 "core_count": 1 00:16:42.109 } 00:16:42.109 [2024-11-27 14:16:19.300677] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:42.109 [2024-11-27 14:16:19.300703] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:42.109 
14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # jq length 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@723 -- # '[' true = true ']' 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@725 -- # nbd_start_disks /var/tmp/spdk.sock spare /dev/nbd0 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('spare') 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:42.109 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd0 00:16:42.676 /dev/nbd0 00:16:42.676 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:16:42.676 
14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:16:42.676 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:16:42.676 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:42.677 1+0 records in 00:16:42.677 1+0 records out 00:16:42.677 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00039437 s, 10.4 MB/s 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 
00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z '' ']' 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@728 -- # continue 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev3 ']' 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev3 /dev/nbd1 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev3') 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:42.677 14:16:19 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev3 /dev/nbd1 00:16:42.935 /dev/nbd1 00:16:42.935 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:42.935 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- 
bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:42.935 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:42.935 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:42.935 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:42.935 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:42.935 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:42.935 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:42.935 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:42.935 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:42.936 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:42.936 1+0 records in 00:16:42.936 1+0 records out 00:16:42.936 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000710297 s, 5.8 MB/s 00:16:42.936 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.936 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:42.936 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:42.936 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:42.936 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:42.936 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:42.936 14:16:20 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:42.936 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:43.194 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:43.194 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:43.194 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 00:16:43.194 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:43.194 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:43.194 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:43.194 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@726 -- # for bdev in "${base_bdevs[@]:1}" 00:16:43.453 
14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@727 -- # '[' -z BaseBdev4 ']' 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@730 -- # nbd_start_disks /var/tmp/spdk.sock BaseBdev4 /dev/nbd1 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev4') 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@10 -- # local bdev_list 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd1') 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@11 -- # local nbd_list 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@12 -- # local i 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:43.453 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev4 /dev/nbd1 00:16:43.711 /dev/nbd1 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@873 -- # local i 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@877 -- # break 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:16:43.711 1+0 records in 00:16:43.711 1+0 records out 00:16:43.711 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000312588 s, 13.1 MB/s 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@890 -- # size=4096 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@893 -- # return 0 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:16:43.711 14:16:20 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@731 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:16:43.970 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@732 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd1 00:16:43.970 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:43.970 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd1') 
00:16:43.970 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:43.970 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:43.970 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:43.970 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@734 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@50 -- # local nbd_list 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@51 -- # local i 00:16:44.248 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:16:44.248 14:16:21 
bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@41 -- # break 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/nbd_common.sh@45 -- # return 0 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.507 [2024-11-27 14:16:21.657676] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:44.507 
[2024-11-27 14:16:21.657755] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.507 [2024-11-27 14:16:21.657810] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:16:44.507 [2024-11-27 14:16:21.657829] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.507 [2024-11-27 14:16:21.660821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.507 [2024-11-27 14:16:21.660884] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:44.507 [2024-11-27 14:16:21.661013] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:44.507 [2024-11-27 14:16:21.661105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:44.507 [2024-11-27 14:16:21.661307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:44.507 [2024-11-27 14:16:21.661489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:44.507 spare 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.507 [2024-11-27 14:16:21.761623] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:16:44.507 [2024-11-27 14:16:21.761683] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:44.507 [2024-11-27 14:16:21.762182] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037160 00:16:44.507 [2024-11-27 14:16:21.762460] 
bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:16:44.507 [2024-11-27 14:16:21.762477] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:16:44.507 [2024-11-27 14:16:21.762803] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:44.507 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.507 14:16:21 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:44.768 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.768 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:44.768 "name": "raid_bdev1", 00:16:44.768 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:44.768 "strip_size_kb": 0, 00:16:44.768 "state": "online", 00:16:44.768 "raid_level": "raid1", 00:16:44.768 "superblock": true, 00:16:44.768 "num_base_bdevs": 4, 00:16:44.768 "num_base_bdevs_discovered": 3, 00:16:44.768 "num_base_bdevs_operational": 3, 00:16:44.768 "base_bdevs_list": [ 00:16:44.768 { 00:16:44.768 "name": "spare", 00:16:44.768 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a", 00:16:44.768 "is_configured": true, 00:16:44.768 "data_offset": 2048, 00:16:44.768 "data_size": 63488 00:16:44.768 }, 00:16:44.768 { 00:16:44.768 "name": null, 00:16:44.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:44.768 "is_configured": false, 00:16:44.768 "data_offset": 2048, 00:16:44.768 "data_size": 63488 00:16:44.768 }, 00:16:44.769 { 00:16:44.769 "name": "BaseBdev3", 00:16:44.769 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:44.769 "is_configured": true, 00:16:44.769 "data_offset": 2048, 00:16:44.769 "data_size": 63488 00:16:44.769 }, 00:16:44.769 { 00:16:44.769 "name": "BaseBdev4", 00:16:44.769 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:44.769 "is_configured": true, 00:16:44.769 "data_offset": 2048, 00:16:44.769 "data_size": 63488 00:16:44.769 } 00:16:44.769 ] 00:16:44.769 }' 00:16:44.769 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:44.769 14:16:21 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io 
-- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:45.337 "name": "raid_bdev1", 00:16:45.337 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:45.337 "strip_size_kb": 0, 00:16:45.337 "state": "online", 00:16:45.337 "raid_level": "raid1", 00:16:45.337 "superblock": true, 00:16:45.337 "num_base_bdevs": 4, 00:16:45.337 "num_base_bdevs_discovered": 3, 00:16:45.337 "num_base_bdevs_operational": 3, 00:16:45.337 "base_bdevs_list": [ 00:16:45.337 { 00:16:45.337 "name": "spare", 00:16:45.337 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a", 00:16:45.337 "is_configured": true, 00:16:45.337 "data_offset": 2048, 00:16:45.337 "data_size": 63488 00:16:45.337 }, 00:16:45.337 { 00:16:45.337 "name": null, 00:16:45.337 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.337 "is_configured": false, 00:16:45.337 "data_offset": 2048, 00:16:45.337 "data_size": 63488 00:16:45.337 }, 00:16:45.337 { 00:16:45.337 "name": "BaseBdev3", 00:16:45.337 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 
00:16:45.337 "is_configured": true, 00:16:45.337 "data_offset": 2048, 00:16:45.337 "data_size": 63488 00:16:45.337 }, 00:16:45.337 { 00:16:45.337 "name": "BaseBdev4", 00:16:45.337 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:45.337 "is_configured": true, 00:16:45.337 "data_offset": 2048, 00:16:45.337 "data_size": 63488 00:16:45.337 } 00:16:45.337 ] 00:16:45.337 }' 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.337 [2024-11-27 14:16:22.531056] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.337 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:45.337 "name": "raid_bdev1", 00:16:45.338 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:45.338 "strip_size_kb": 0, 00:16:45.338 "state": 
"online", 00:16:45.338 "raid_level": "raid1", 00:16:45.338 "superblock": true, 00:16:45.338 "num_base_bdevs": 4, 00:16:45.338 "num_base_bdevs_discovered": 2, 00:16:45.338 "num_base_bdevs_operational": 2, 00:16:45.338 "base_bdevs_list": [ 00:16:45.338 { 00:16:45.338 "name": null, 00:16:45.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.338 "is_configured": false, 00:16:45.338 "data_offset": 0, 00:16:45.338 "data_size": 63488 00:16:45.338 }, 00:16:45.338 { 00:16:45.338 "name": null, 00:16:45.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:45.338 "is_configured": false, 00:16:45.338 "data_offset": 2048, 00:16:45.338 "data_size": 63488 00:16:45.338 }, 00:16:45.338 { 00:16:45.338 "name": "BaseBdev3", 00:16:45.338 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:45.338 "is_configured": true, 00:16:45.338 "data_offset": 2048, 00:16:45.338 "data_size": 63488 00:16:45.338 }, 00:16:45.338 { 00:16:45.338 "name": "BaseBdev4", 00:16:45.338 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:45.338 "is_configured": true, 00:16:45.338 "data_offset": 2048, 00:16:45.338 "data_size": 63488 00:16:45.338 } 00:16:45.338 ] 00:16:45.338 }' 00:16:45.338 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:45.338 14:16:22 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.903 14:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:16:45.903 14:16:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.903 14:16:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:45.903 [2024-11-27 14:16:23.059427] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.903 [2024-11-27 14:16:23.059855] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev 
raid_bdev1 (6) 00:16:45.903 [2024-11-27 14:16:23.059892] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:16:45.903 [2024-11-27 14:16:23.059943] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:45.903 [2024-11-27 14:16:23.074660] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037230 00:16:45.903 14:16:23 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.903 14:16:23 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@757 -- # sleep 1 00:16:45.903 [2024-11-27 14:16:23.077254] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:46.839 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:46.839 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:46.839 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:46.839 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:46.839 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:46.839 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:46.839 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.839 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:46.839 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:46.839 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:47.099 
"name": "raid_bdev1", 00:16:47.099 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:47.099 "strip_size_kb": 0, 00:16:47.099 "state": "online", 00:16:47.099 "raid_level": "raid1", 00:16:47.099 "superblock": true, 00:16:47.099 "num_base_bdevs": 4, 00:16:47.099 "num_base_bdevs_discovered": 3, 00:16:47.099 "num_base_bdevs_operational": 3, 00:16:47.099 "process": { 00:16:47.099 "type": "rebuild", 00:16:47.099 "target": "spare", 00:16:47.099 "progress": { 00:16:47.099 "blocks": 20480, 00:16:47.099 "percent": 32 00:16:47.099 } 00:16:47.099 }, 00:16:47.099 "base_bdevs_list": [ 00:16:47.099 { 00:16:47.099 "name": "spare", 00:16:47.099 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a", 00:16:47.099 "is_configured": true, 00:16:47.099 "data_offset": 2048, 00:16:47.099 "data_size": 63488 00:16:47.099 }, 00:16:47.099 { 00:16:47.099 "name": null, 00:16:47.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.099 "is_configured": false, 00:16:47.099 "data_offset": 2048, 00:16:47.099 "data_size": 63488 00:16:47.099 }, 00:16:47.099 { 00:16:47.099 "name": "BaseBdev3", 00:16:47.099 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:47.099 "is_configured": true, 00:16:47.099 "data_offset": 2048, 00:16:47.099 "data_size": 63488 00:16:47.099 }, 00:16:47.099 { 00:16:47.099 "name": "BaseBdev4", 00:16:47.099 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:47.099 "is_configured": true, 00:16:47.099 "data_offset": 2048, 00:16:47.099 "data_size": 63488 00:16:47.099 } 00:16:47.099 ] 00:16:47.099 }' 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:47.099 
14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.099 [2024-11-27 14:16:24.251299] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.099 [2024-11-27 14:16:24.286401] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:47.099 [2024-11-27 14:16:24.286527] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.099 [2024-11-27 14:16:24.286560] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:47.099 [2024-11-27 14:16:24.286575] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:47.099 14:16:24 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.099 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:47.099 "name": "raid_bdev1", 00:16:47.099 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:47.099 "strip_size_kb": 0, 00:16:47.099 "state": "online", 00:16:47.099 "raid_level": "raid1", 00:16:47.099 "superblock": true, 00:16:47.099 "num_base_bdevs": 4, 00:16:47.099 "num_base_bdevs_discovered": 2, 00:16:47.099 "num_base_bdevs_operational": 2, 00:16:47.099 "base_bdevs_list": [ 00:16:47.099 { 00:16:47.099 "name": null, 00:16:47.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.099 "is_configured": false, 00:16:47.099 "data_offset": 0, 00:16:47.099 "data_size": 63488 00:16:47.099 }, 00:16:47.099 { 00:16:47.099 "name": null, 00:16:47.099 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.100 "is_configured": false, 00:16:47.100 "data_offset": 2048, 00:16:47.100 "data_size": 63488 00:16:47.100 }, 00:16:47.100 { 00:16:47.100 "name": "BaseBdev3", 00:16:47.100 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:47.100 "is_configured": true, 00:16:47.100 "data_offset": 2048, 00:16:47.100 "data_size": 63488 00:16:47.100 }, 00:16:47.100 { 00:16:47.100 "name": "BaseBdev4", 00:16:47.100 "uuid": 
"de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:47.100 "is_configured": true, 00:16:47.100 "data_offset": 2048, 00:16:47.100 "data_size": 63488 00:16:47.100 } 00:16:47.100 ] 00:16:47.100 }' 00:16:47.100 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:47.100 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.668 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:16:47.668 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.668 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:47.668 [2024-11-27 14:16:24.841033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:16:47.668 [2024-11-27 14:16:24.841114] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:47.668 [2024-11-27 14:16:24.841154] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:16:47.668 [2024-11-27 14:16:24.841172] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:47.668 [2024-11-27 14:16:24.841825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:47.668 [2024-11-27 14:16:24.841874] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:16:47.668 [2024-11-27 14:16:24.841990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:16:47.668 [2024-11-27 14:16:24.842030] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (5) smaller than existing raid bdev raid_bdev1 (6) 00:16:47.668 [2024-11-27 14:16:24.842044] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:16:47.668 [2024-11-27 14:16:24.842078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:16:47.668 [2024-11-27 14:16:24.856179] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000037300 00:16:47.668 spare 00:16:47.668 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.668 14:16:24 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@764 -- # sleep 1 00:16:47.668 [2024-11-27 14:16:24.858731] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:16:48.626 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:16:48.626 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:48.626 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:16:48.626 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=spare 00:16:48.626 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:48.626 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.626 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.626 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.626 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.918 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.918 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:48.918 "name": "raid_bdev1", 00:16:48.918 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:48.918 "strip_size_kb": 0, 00:16:48.918 
"state": "online", 00:16:48.918 "raid_level": "raid1", 00:16:48.918 "superblock": true, 00:16:48.918 "num_base_bdevs": 4, 00:16:48.918 "num_base_bdevs_discovered": 3, 00:16:48.918 "num_base_bdevs_operational": 3, 00:16:48.918 "process": { 00:16:48.918 "type": "rebuild", 00:16:48.918 "target": "spare", 00:16:48.918 "progress": { 00:16:48.918 "blocks": 20480, 00:16:48.918 "percent": 32 00:16:48.918 } 00:16:48.918 }, 00:16:48.918 "base_bdevs_list": [ 00:16:48.918 { 00:16:48.918 "name": "spare", 00:16:48.918 "uuid": "cf37fb7d-e8a8-5c01-938d-6071c64abe1a", 00:16:48.918 "is_configured": true, 00:16:48.918 "data_offset": 2048, 00:16:48.918 "data_size": 63488 00:16:48.918 }, 00:16:48.918 { 00:16:48.918 "name": null, 00:16:48.918 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.918 "is_configured": false, 00:16:48.918 "data_offset": 2048, 00:16:48.918 "data_size": 63488 00:16:48.918 }, 00:16:48.918 { 00:16:48.918 "name": "BaseBdev3", 00:16:48.918 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:48.918 "is_configured": true, 00:16:48.918 "data_offset": 2048, 00:16:48.918 "data_size": 63488 00:16:48.918 }, 00:16:48.918 { 00:16:48.918 "name": "BaseBdev4", 00:16:48.918 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:48.918 "is_configured": true, 00:16:48.918 "data_offset": 2048, 00:16:48.918 "data_size": 63488 00:16:48.918 } 00:16:48.918 ] 00:16:48.918 }' 00:16:48.918 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:48.918 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:16:48.918 14:16:25 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:48.918 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:16:48.918 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:16:48.918 14:16:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.918 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.918 [2024-11-27 14:16:26.032519] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.918 [2024-11-27 14:16:26.067913] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:16:48.918 [2024-11-27 14:16:26.068160] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:48.918 [2024-11-27 14:16:26.068302] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:16:48.918 [2024-11-27 14:16:26.068355] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:16:48.918 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.918 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:48.919 14:16:26 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:48.919 "name": "raid_bdev1", 00:16:48.919 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:48.919 "strip_size_kb": 0, 00:16:48.919 "state": "online", 00:16:48.919 "raid_level": "raid1", 00:16:48.919 "superblock": true, 00:16:48.919 "num_base_bdevs": 4, 00:16:48.919 "num_base_bdevs_discovered": 2, 00:16:48.919 "num_base_bdevs_operational": 2, 00:16:48.919 "base_bdevs_list": [ 00:16:48.919 { 00:16:48.919 "name": null, 00:16:48.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.919 "is_configured": false, 00:16:48.919 "data_offset": 0, 00:16:48.919 "data_size": 63488 00:16:48.919 }, 00:16:48.919 { 00:16:48.919 "name": null, 00:16:48.919 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:48.919 "is_configured": false, 00:16:48.919 "data_offset": 2048, 00:16:48.919 "data_size": 63488 00:16:48.919 }, 00:16:48.919 { 00:16:48.919 "name": "BaseBdev3", 00:16:48.919 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:48.919 "is_configured": true, 00:16:48.919 "data_offset": 2048, 00:16:48.919 "data_size": 63488 00:16:48.919 }, 00:16:48.919 { 00:16:48.919 "name": "BaseBdev4", 00:16:48.919 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:48.919 "is_configured": true, 00:16:48.919 "data_offset": 2048, 00:16:48.919 
"data_size": 63488 00:16:48.919 } 00:16:48.919 ] 00:16:48.919 }' 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:48.919 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:49.490 "name": "raid_bdev1", 00:16:49.490 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:49.490 "strip_size_kb": 0, 00:16:49.490 "state": "online", 00:16:49.490 "raid_level": "raid1", 00:16:49.490 "superblock": true, 00:16:49.490 "num_base_bdevs": 4, 00:16:49.490 "num_base_bdevs_discovered": 2, 00:16:49.490 "num_base_bdevs_operational": 2, 00:16:49.490 "base_bdevs_list": [ 00:16:49.490 { 00:16:49.490 "name": null, 00:16:49.490 "uuid": "00000000-0000-0000-0000-000000000000", 
00:16:49.490 "is_configured": false, 00:16:49.490 "data_offset": 0, 00:16:49.490 "data_size": 63488 00:16:49.490 }, 00:16:49.490 { 00:16:49.490 "name": null, 00:16:49.490 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.490 "is_configured": false, 00:16:49.490 "data_offset": 2048, 00:16:49.490 "data_size": 63488 00:16:49.490 }, 00:16:49.490 { 00:16:49.490 "name": "BaseBdev3", 00:16:49.490 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:49.490 "is_configured": true, 00:16:49.490 "data_offset": 2048, 00:16:49.490 "data_size": 63488 00:16:49.490 }, 00:16:49.490 { 00:16:49.490 "name": "BaseBdev4", 00:16:49.490 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:49.490 "is_configured": true, 00:16:49.490 "data_offset": 2048, 00:16:49.490 "data_size": 63488 00:16:49.490 } 00:16:49.490 ] 00:16:49.490 }' 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:49.490 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:49.750 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:49.750 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:16:49.750 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.750 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.750 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.750 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:16:49.750 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.750 14:16:26 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:49.750 [2024-11-27 14:16:26.812510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:16:49.750 [2024-11-27 14:16:26.812603] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:49.750 [2024-11-27 14:16:26.812633] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000cc80 00:16:49.750 [2024-11-27 14:16:26.812646] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:49.750 [2024-11-27 14:16:26.813328] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:49.750 [2024-11-27 14:16:26.813374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:49.750 [2024-11-27 14:16:26.813475] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:16:49.750 [2024-11-27 14:16:26.813495] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:49.750 [2024-11-27 14:16:26.813514] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:49.750 [2024-11-27 14:16:26.813526] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:16:49.750 BaseBdev1 00:16:49.750 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.750 14:16:26 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@775 -- # sleep 1 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:50.685 "name": "raid_bdev1", 00:16:50.685 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:50.685 "strip_size_kb": 0, 00:16:50.685 "state": "online", 00:16:50.685 "raid_level": "raid1", 00:16:50.685 "superblock": true, 00:16:50.685 "num_base_bdevs": 4, 00:16:50.685 "num_base_bdevs_discovered": 2, 00:16:50.685 "num_base_bdevs_operational": 2, 00:16:50.685 "base_bdevs_list": [ 00:16:50.685 { 00:16:50.685 "name": null, 00:16:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.685 "is_configured": false, 00:16:50.685 
"data_offset": 0, 00:16:50.685 "data_size": 63488 00:16:50.685 }, 00:16:50.685 { 00:16:50.685 "name": null, 00:16:50.685 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:50.685 "is_configured": false, 00:16:50.685 "data_offset": 2048, 00:16:50.685 "data_size": 63488 00:16:50.685 }, 00:16:50.685 { 00:16:50.685 "name": "BaseBdev3", 00:16:50.685 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:50.685 "is_configured": true, 00:16:50.685 "data_offset": 2048, 00:16:50.685 "data_size": 63488 00:16:50.685 }, 00:16:50.685 { 00:16:50.685 "name": "BaseBdev4", 00:16:50.685 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:50.685 "is_configured": true, 00:16:50.685 "data_offset": 2048, 00:16:50.685 "data_size": 63488 00:16:50.685 } 00:16:50.685 ] 00:16:50.685 }' 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:50.685 14:16:27 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.251 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:51.251 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:51.251 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:51.251 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:51.251 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:51.251 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:51.251 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.251 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.251 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:16:51.251 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.251 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:51.251 "name": "raid_bdev1", 00:16:51.252 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:51.252 "strip_size_kb": 0, 00:16:51.252 "state": "online", 00:16:51.252 "raid_level": "raid1", 00:16:51.252 "superblock": true, 00:16:51.252 "num_base_bdevs": 4, 00:16:51.252 "num_base_bdevs_discovered": 2, 00:16:51.252 "num_base_bdevs_operational": 2, 00:16:51.252 "base_bdevs_list": [ 00:16:51.252 { 00:16:51.252 "name": null, 00:16:51.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.252 "is_configured": false, 00:16:51.252 "data_offset": 0, 00:16:51.252 "data_size": 63488 00:16:51.252 }, 00:16:51.252 { 00:16:51.252 "name": null, 00:16:51.252 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.252 "is_configured": false, 00:16:51.252 "data_offset": 2048, 00:16:51.252 "data_size": 63488 00:16:51.252 }, 00:16:51.252 { 00:16:51.252 "name": "BaseBdev3", 00:16:51.252 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:51.252 "is_configured": true, 00:16:51.252 "data_offset": 2048, 00:16:51.252 "data_size": 63488 00:16:51.252 }, 00:16:51.252 { 00:16:51.252 "name": "BaseBdev4", 00:16:51.252 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:51.252 "is_configured": true, 00:16:51.252 "data_offset": 2048, 00:16:51.252 "data_size": 63488 00:16:51.252 } 00:16:51.252 ] 00:16:51.252 }' 00:16:51.252 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:51.252 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:51.252 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 
00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@652 -- # local es=0 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:51.510 [2024-11-27 14:16:28.541491] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:51.510 [2024-11-27 14:16:28.541718] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (6) 00:16:51.510 [2024-11-27 14:16:28.541741] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:16:51.510 request: 00:16:51.510 { 00:16:51.510 "base_bdev": "BaseBdev1", 00:16:51.510 "raid_bdev": "raid_bdev1", 00:16:51.510 "method": "bdev_raid_add_base_bdev", 00:16:51.510 "req_id": 1 00:16:51.510 } 00:16:51.510 Got JSON-RPC error response 00:16:51.510 response: 00:16:51.510 { 00:16:51.510 "code": -22, 
00:16:51.510 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:16:51.510 } 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@655 -- # es=1 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:51.510 14:16:28 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@779 -- # sleep 1 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:52.477 14:16:29 
bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:52.477 "name": "raid_bdev1", 00:16:52.477 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:52.477 "strip_size_kb": 0, 00:16:52.477 "state": "online", 00:16:52.477 "raid_level": "raid1", 00:16:52.477 "superblock": true, 00:16:52.477 "num_base_bdevs": 4, 00:16:52.477 "num_base_bdevs_discovered": 2, 00:16:52.477 "num_base_bdevs_operational": 2, 00:16:52.477 "base_bdevs_list": [ 00:16:52.477 { 00:16:52.477 "name": null, 00:16:52.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.477 "is_configured": false, 00:16:52.477 "data_offset": 0, 00:16:52.477 "data_size": 63488 00:16:52.477 }, 00:16:52.477 { 00:16:52.477 "name": null, 00:16:52.477 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.477 "is_configured": false, 00:16:52.477 "data_offset": 2048, 00:16:52.477 "data_size": 63488 00:16:52.477 }, 00:16:52.477 { 00:16:52.477 "name": "BaseBdev3", 00:16:52.477 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:52.477 "is_configured": true, 00:16:52.477 "data_offset": 2048, 00:16:52.477 "data_size": 63488 00:16:52.477 }, 00:16:52.477 { 00:16:52.477 "name": "BaseBdev4", 00:16:52.477 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:52.477 "is_configured": true, 00:16:52.477 "data_offset": 2048, 00:16:52.477 "data_size": 63488 00:16:52.477 } 00:16:52.477 ] 00:16:52.477 }' 00:16:52.477 14:16:29 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:52.477 14:16:29 
bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.045 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:16:53.045 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:16:53.045 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:16:53.045 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@171 -- # local target=none 00:16:53.045 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:16:53.045 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:53.045 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:53.045 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.045 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@10 -- # set +x 00:16:53.045 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.045 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:16:53.045 "name": "raid_bdev1", 00:16:53.045 "uuid": "0c8764aa-b886-495b-b7cd-88652764e86d", 00:16:53.045 "strip_size_kb": 0, 00:16:53.045 "state": "online", 00:16:53.045 "raid_level": "raid1", 00:16:53.045 "superblock": true, 00:16:53.045 "num_base_bdevs": 4, 00:16:53.045 "num_base_bdevs_discovered": 2, 00:16:53.045 "num_base_bdevs_operational": 2, 00:16:53.045 "base_bdevs_list": [ 00:16:53.045 { 00:16:53.045 "name": null, 00:16:53.045 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:53.045 "is_configured": false, 00:16:53.045 "data_offset": 0, 00:16:53.045 "data_size": 63488 00:16:53.045 }, 00:16:53.045 { 00:16:53.045 "name": null, 00:16:53.045 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:16:53.045 "is_configured": false, 00:16:53.045 "data_offset": 2048, 00:16:53.045 "data_size": 63488 00:16:53.045 }, 00:16:53.045 { 00:16:53.045 "name": "BaseBdev3", 00:16:53.045 "uuid": "8a04e0e1-8904-5e2a-9703-f151f4de9316", 00:16:53.045 "is_configured": true, 00:16:53.046 "data_offset": 2048, 00:16:53.046 "data_size": 63488 00:16:53.046 }, 00:16:53.046 { 00:16:53.046 "name": "BaseBdev4", 00:16:53.046 "uuid": "de4889f5-7e58-5ec6-9d4d-ba35a7cecf66", 00:16:53.046 "is_configured": true, 00:16:53.046 "data_offset": 2048, 00:16:53.046 "data_size": 63488 00:16:53.046 } 00:16:53.046 ] 00:16:53.046 }' 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@784 -- # killprocess 79394 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@954 -- # '[' -z 79394 ']' 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@958 -- # kill -0 79394 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # uname 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79394 00:16:53.046 killing process with pid 79394 00:16:53.046 Received shutdown signal, test time was about 19.635586 seconds 00:16:53.046 00:16:53.046 Latency(us) 00:16:53.046 [2024-11-27T14:16:30.324Z] Device Information : runtime(s) 
IOPS MiB/s Fail/s TO/s Average min max 00:16:53.046 [2024-11-27T14:16:30.324Z] =================================================================================================================== 00:16:53.046 [2024-11-27T14:16:30.324Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79394' 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@973 -- # kill 79394 00:16:53.046 14:16:30 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@978 -- # wait 79394 00:16:53.046 [2024-11-27 14:16:30.247495] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:53.046 [2024-11-27 14:16:30.247649] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:53.046 [2024-11-27 14:16:30.247800] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:53.046 [2024-11-27 14:16:30.247826] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:16:53.613 [2024-11-27 14:16:30.599573] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:54.547 14:16:31 bdev_raid.raid_rebuild_test_sb_io -- bdev/bdev_raid.sh@786 -- # return 0 00:16:54.547 00:16:54.547 real 0m23.311s 00:16:54.547 user 0m31.915s 00:16:54.547 sys 0m2.405s 00:16:54.547 14:16:31 bdev_raid.raid_rebuild_test_sb_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:54.547 ************************************ 00:16:54.547 END TEST raid_rebuild_test_sb_io 00:16:54.547 ************************************ 00:16:54.547 14:16:31 bdev_raid.raid_rebuild_test_sb_io -- 
common/autotest_common.sh@10 -- # set +x 00:16:54.547 14:16:31 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:16:54.548 14:16:31 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 3 false 00:16:54.548 14:16:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:16:54.548 14:16:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:54.548 14:16:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:54.548 ************************************ 00:16:54.548 START TEST raid5f_state_function_test 00:16:54.548 ************************************ 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 false 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:54.548 14:16:31 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:16:54.548 Process raid pid: 80133 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=80133 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:16:54.548 
14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80133' 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 80133 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 80133 ']' 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:54.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:54.548 14:16:31 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:54.806 [2024-11-27 14:16:31.860501] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:16:54.806 [2024-11-27 14:16:31.861357] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:54.806 [2024-11-27 14:16:32.046476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:55.064 [2024-11-27 14:16:32.177018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:55.322 [2024-11-27 14:16:32.381805] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.322 [2024-11-27 14:16:32.381887] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.891 [2024-11-27 14:16:32.866885] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:55.891 [2024-11-27 14:16:32.866948] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:55.891 [2024-11-27 14:16:32.866971] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:55.891 [2024-11-27 14:16:32.866988] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:55.891 [2024-11-27 14:16:32.866998] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: 
BaseBdev3 00:16:55.891 [2024-11-27 14:16:32.867011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:55.891 "name": "Existed_Raid", 00:16:55.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.891 "strip_size_kb": 64, 00:16:55.891 "state": "configuring", 00:16:55.891 "raid_level": "raid5f", 00:16:55.891 "superblock": false, 00:16:55.891 "num_base_bdevs": 3, 00:16:55.891 "num_base_bdevs_discovered": 0, 00:16:55.891 "num_base_bdevs_operational": 3, 00:16:55.891 "base_bdevs_list": [ 00:16:55.891 { 00:16:55.891 "name": "BaseBdev1", 00:16:55.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.891 "is_configured": false, 00:16:55.891 "data_offset": 0, 00:16:55.891 "data_size": 0 00:16:55.891 }, 00:16:55.891 { 00:16:55.891 "name": "BaseBdev2", 00:16:55.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.891 "is_configured": false, 00:16:55.891 "data_offset": 0, 00:16:55.891 "data_size": 0 00:16:55.891 }, 00:16:55.891 { 00:16:55.891 "name": "BaseBdev3", 00:16:55.891 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.891 "is_configured": false, 00:16:55.891 "data_offset": 0, 00:16:55.891 "data_size": 0 00:16:55.891 } 00:16:55.891 ] 00:16:55.891 }' 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:55.891 14:16:32 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.150 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:56.150 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.150 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.150 [2024-11-27 14:16:33.394991] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.150 [2024-11-27 14:16:33.395237] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000007780 name Existed_Raid, state configuring 00:16:56.150 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.151 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:56.151 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.151 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.151 [2024-11-27 14:16:33.402973] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:56.151 [2024-11-27 14:16:33.403029] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:56.151 [2024-11-27 14:16:33.403045] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:56.151 [2024-11-27 14:16:33.403061] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.151 [2024-11-27 14:16:33.403071] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.151 [2024-11-27 14:16:33.403100] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.151 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.151 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:16:56.151 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.151 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.410 [2024-11-27 14:16:33.447105] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.410 BaseBdev1 00:16:56.410 14:16:33 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.410 [ 00:16:56.410 { 00:16:56.410 "name": "BaseBdev1", 00:16:56.410 "aliases": [ 00:16:56.410 "b1ceff03-1de5-408e-ac39-daf092fba146" 00:16:56.410 ], 00:16:56.410 "product_name": "Malloc disk", 00:16:56.410 "block_size": 512, 00:16:56.410 "num_blocks": 65536, 00:16:56.410 "uuid": "b1ceff03-1de5-408e-ac39-daf092fba146", 00:16:56.410 "assigned_rate_limits": { 00:16:56.410 "rw_ios_per_sec": 0, 00:16:56.410 
"rw_mbytes_per_sec": 0, 00:16:56.410 "r_mbytes_per_sec": 0, 00:16:56.410 "w_mbytes_per_sec": 0 00:16:56.410 }, 00:16:56.410 "claimed": true, 00:16:56.410 "claim_type": "exclusive_write", 00:16:56.410 "zoned": false, 00:16:56.410 "supported_io_types": { 00:16:56.410 "read": true, 00:16:56.410 "write": true, 00:16:56.410 "unmap": true, 00:16:56.410 "flush": true, 00:16:56.410 "reset": true, 00:16:56.410 "nvme_admin": false, 00:16:56.410 "nvme_io": false, 00:16:56.410 "nvme_io_md": false, 00:16:56.410 "write_zeroes": true, 00:16:56.410 "zcopy": true, 00:16:56.410 "get_zone_info": false, 00:16:56.410 "zone_management": false, 00:16:56.410 "zone_append": false, 00:16:56.410 "compare": false, 00:16:56.410 "compare_and_write": false, 00:16:56.410 "abort": true, 00:16:56.410 "seek_hole": false, 00:16:56.410 "seek_data": false, 00:16:56.410 "copy": true, 00:16:56.410 "nvme_iov_md": false 00:16:56.410 }, 00:16:56.410 "memory_domains": [ 00:16:56.410 { 00:16:56.410 "dma_device_id": "system", 00:16:56.410 "dma_device_type": 1 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:56.410 "dma_device_type": 2 00:16:56.410 } 00:16:56.410 ], 00:16:56.410 "driver_specific": {} 00:16:56.410 } 00:16:56.410 ] 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.410 14:16:33 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.410 "name": "Existed_Raid", 00:16:56.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.410 "strip_size_kb": 64, 00:16:56.410 "state": "configuring", 00:16:56.410 "raid_level": "raid5f", 00:16:56.410 "superblock": false, 00:16:56.410 "num_base_bdevs": 3, 00:16:56.410 "num_base_bdevs_discovered": 1, 00:16:56.410 "num_base_bdevs_operational": 3, 00:16:56.410 "base_bdevs_list": [ 00:16:56.410 { 00:16:56.410 "name": "BaseBdev1", 00:16:56.410 "uuid": "b1ceff03-1de5-408e-ac39-daf092fba146", 00:16:56.410 "is_configured": true, 00:16:56.410 "data_offset": 0, 00:16:56.410 "data_size": 65536 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "name": 
"BaseBdev2", 00:16:56.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.410 "is_configured": false, 00:16:56.410 "data_offset": 0, 00:16:56.410 "data_size": 0 00:16:56.410 }, 00:16:56.410 { 00:16:56.410 "name": "BaseBdev3", 00:16:56.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.410 "is_configured": false, 00:16:56.410 "data_offset": 0, 00:16:56.410 "data_size": 0 00:16:56.410 } 00:16:56.410 ] 00:16:56.410 }' 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.410 14:16:33 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.983 [2024-11-27 14:16:34.019336] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:56.983 [2024-11-27 14:16:34.019523] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.983 [2024-11-27 14:16:34.027397] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:56.983 [2024-11-27 14:16:34.029899] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev2 00:16:56.983 [2024-11-27 14:16:34.029967] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:56.983 [2024-11-27 14:16:34.029984] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:16:56.983 [2024-11-27 14:16:34.030000] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.983 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:56.983 "name": "Existed_Raid", 00:16:56.983 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.983 "strip_size_kb": 64, 00:16:56.983 "state": "configuring", 00:16:56.984 "raid_level": "raid5f", 00:16:56.984 "superblock": false, 00:16:56.984 "num_base_bdevs": 3, 00:16:56.984 "num_base_bdevs_discovered": 1, 00:16:56.984 "num_base_bdevs_operational": 3, 00:16:56.984 "base_bdevs_list": [ 00:16:56.984 { 00:16:56.984 "name": "BaseBdev1", 00:16:56.984 "uuid": "b1ceff03-1de5-408e-ac39-daf092fba146", 00:16:56.984 "is_configured": true, 00:16:56.984 "data_offset": 0, 00:16:56.984 "data_size": 65536 00:16:56.984 }, 00:16:56.984 { 00:16:56.984 "name": "BaseBdev2", 00:16:56.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.984 "is_configured": false, 00:16:56.984 "data_offset": 0, 00:16:56.984 "data_size": 0 00:16:56.984 }, 00:16:56.984 { 00:16:56.984 "name": "BaseBdev3", 00:16:56.984 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:56.984 "is_configured": false, 00:16:56.984 "data_offset": 0, 00:16:56.984 "data_size": 0 00:16:56.984 } 00:16:56.984 ] 00:16:56.984 }' 00:16:56.984 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:56.984 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.560 [2024-11-27 14:16:34.604642] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:57.560 BaseBdev2 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.560 [ 00:16:57.560 { 00:16:57.560 "name": "BaseBdev2", 00:16:57.560 "aliases": [ 00:16:57.560 "b1b2e538-efbf-4932-b015-64a20ea45d39" 00:16:57.560 ], 00:16:57.560 "product_name": "Malloc disk", 00:16:57.560 "block_size": 512, 00:16:57.560 "num_blocks": 65536, 00:16:57.560 "uuid": "b1b2e538-efbf-4932-b015-64a20ea45d39", 00:16:57.560 "assigned_rate_limits": { 00:16:57.560 "rw_ios_per_sec": 0, 00:16:57.560 "rw_mbytes_per_sec": 0, 00:16:57.560 "r_mbytes_per_sec": 0, 00:16:57.560 "w_mbytes_per_sec": 0 00:16:57.560 }, 00:16:57.560 "claimed": true, 00:16:57.560 "claim_type": "exclusive_write", 00:16:57.560 "zoned": false, 00:16:57.560 "supported_io_types": { 00:16:57.560 "read": true, 00:16:57.560 "write": true, 00:16:57.560 "unmap": true, 00:16:57.560 "flush": true, 00:16:57.560 "reset": true, 00:16:57.560 "nvme_admin": false, 00:16:57.560 "nvme_io": false, 00:16:57.560 "nvme_io_md": false, 00:16:57.560 "write_zeroes": true, 00:16:57.560 "zcopy": true, 00:16:57.560 "get_zone_info": false, 00:16:57.560 "zone_management": false, 00:16:57.560 "zone_append": false, 00:16:57.560 "compare": false, 00:16:57.560 "compare_and_write": false, 00:16:57.560 "abort": true, 00:16:57.560 "seek_hole": false, 00:16:57.560 "seek_data": false, 00:16:57.560 "copy": true, 00:16:57.560 "nvme_iov_md": false 00:16:57.560 }, 00:16:57.560 "memory_domains": [ 00:16:57.560 { 00:16:57.560 "dma_device_id": "system", 00:16:57.560 "dma_device_type": 1 00:16:57.560 }, 00:16:57.560 { 00:16:57.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:57.560 "dma_device_type": 2 00:16:57.560 } 00:16:57.560 ], 00:16:57.560 "driver_specific": {} 00:16:57.560 } 00:16:57.560 ] 00:16:57.560 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- 
# raid_bdev_info='{ 00:16:57.561 "name": "Existed_Raid", 00:16:57.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.561 "strip_size_kb": 64, 00:16:57.561 "state": "configuring", 00:16:57.561 "raid_level": "raid5f", 00:16:57.561 "superblock": false, 00:16:57.561 "num_base_bdevs": 3, 00:16:57.561 "num_base_bdevs_discovered": 2, 00:16:57.561 "num_base_bdevs_operational": 3, 00:16:57.561 "base_bdevs_list": [ 00:16:57.561 { 00:16:57.561 "name": "BaseBdev1", 00:16:57.561 "uuid": "b1ceff03-1de5-408e-ac39-daf092fba146", 00:16:57.561 "is_configured": true, 00:16:57.561 "data_offset": 0, 00:16:57.561 "data_size": 65536 00:16:57.561 }, 00:16:57.561 { 00:16:57.561 "name": "BaseBdev2", 00:16:57.561 "uuid": "b1b2e538-efbf-4932-b015-64a20ea45d39", 00:16:57.561 "is_configured": true, 00:16:57.561 "data_offset": 0, 00:16:57.561 "data_size": 65536 00:16:57.561 }, 00:16:57.561 { 00:16:57.561 "name": "BaseBdev3", 00:16:57.561 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:57.561 "is_configured": false, 00:16:57.561 "data_offset": 0, 00:16:57.561 "data_size": 0 00:16:57.561 } 00:16:57.561 ] 00:16:57.561 }' 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:57.561 14:16:34 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.128 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:16:58.128 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.128 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.128 [2024-11-27 14:16:35.213386] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:58.128 [2024-11-27 14:16:35.213454] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:16:58.128 [2024-11-27 14:16:35.213475] 
bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:16:58.128 [2024-11-27 14:16:35.213823] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:16:58.128 [2024-11-27 14:16:35.219026] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:16:58.128 [2024-11-27 14:16:35.219202] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:16:58.129 [2024-11-27 14:16:35.219556] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.129 BaseBdev3 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b 
BaseBdev3 -t 2000 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.129 [ 00:16:58.129 { 00:16:58.129 "name": "BaseBdev3", 00:16:58.129 "aliases": [ 00:16:58.129 "0c839af2-6bb4-44da-b779-4eecb2fea7fc" 00:16:58.129 ], 00:16:58.129 "product_name": "Malloc disk", 00:16:58.129 "block_size": 512, 00:16:58.129 "num_blocks": 65536, 00:16:58.129 "uuid": "0c839af2-6bb4-44da-b779-4eecb2fea7fc", 00:16:58.129 "assigned_rate_limits": { 00:16:58.129 "rw_ios_per_sec": 0, 00:16:58.129 "rw_mbytes_per_sec": 0, 00:16:58.129 "r_mbytes_per_sec": 0, 00:16:58.129 "w_mbytes_per_sec": 0 00:16:58.129 }, 00:16:58.129 "claimed": true, 00:16:58.129 "claim_type": "exclusive_write", 00:16:58.129 "zoned": false, 00:16:58.129 "supported_io_types": { 00:16:58.129 "read": true, 00:16:58.129 "write": true, 00:16:58.129 "unmap": true, 00:16:58.129 "flush": true, 00:16:58.129 "reset": true, 00:16:58.129 "nvme_admin": false, 00:16:58.129 "nvme_io": false, 00:16:58.129 "nvme_io_md": false, 00:16:58.129 "write_zeroes": true, 00:16:58.129 "zcopy": true, 00:16:58.129 "get_zone_info": false, 00:16:58.129 "zone_management": false, 00:16:58.129 "zone_append": false, 00:16:58.129 "compare": false, 00:16:58.129 "compare_and_write": false, 00:16:58.129 "abort": true, 00:16:58.129 "seek_hole": false, 00:16:58.129 "seek_data": false, 00:16:58.129 "copy": true, 00:16:58.129 "nvme_iov_md": false 00:16:58.129 }, 00:16:58.129 "memory_domains": [ 00:16:58.129 { 00:16:58.129 "dma_device_id": "system", 00:16:58.129 "dma_device_type": 1 00:16:58.129 }, 00:16:58.129 { 00:16:58.129 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:58.129 "dma_device_type": 2 00:16:58.129 } 00:16:58.129 ], 00:16:58.129 "driver_specific": {} 00:16:58.129 } 00:16:58.129 ] 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.129 14:16:35 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:58.129 "name": "Existed_Raid", 00:16:58.129 "uuid": "003b1728-4cc0-4e2c-9803-d1a3819691e7", 00:16:58.129 "strip_size_kb": 64, 00:16:58.129 "state": "online", 00:16:58.129 "raid_level": "raid5f", 00:16:58.129 "superblock": false, 00:16:58.129 "num_base_bdevs": 3, 00:16:58.129 "num_base_bdevs_discovered": 3, 00:16:58.129 "num_base_bdevs_operational": 3, 00:16:58.129 "base_bdevs_list": [ 00:16:58.129 { 00:16:58.129 "name": "BaseBdev1", 00:16:58.129 "uuid": "b1ceff03-1de5-408e-ac39-daf092fba146", 00:16:58.129 "is_configured": true, 00:16:58.129 "data_offset": 0, 00:16:58.129 "data_size": 65536 00:16:58.129 }, 00:16:58.129 { 00:16:58.129 "name": "BaseBdev2", 00:16:58.129 "uuid": "b1b2e538-efbf-4932-b015-64a20ea45d39", 00:16:58.129 "is_configured": true, 00:16:58.129 "data_offset": 0, 00:16:58.129 "data_size": 65536 00:16:58.129 }, 00:16:58.129 { 00:16:58.129 "name": "BaseBdev3", 00:16:58.129 "uuid": "0c839af2-6bb4-44da-b779-4eecb2fea7fc", 00:16:58.129 "is_configured": true, 00:16:58.129 "data_offset": 0, 00:16:58.129 "data_size": 65536 00:16:58.129 } 00:16:58.129 ] 00:16:58.129 }' 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:58.129 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:16:58.696 14:16:35 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.696 [2024-11-27 14:16:35.785905] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:16:58.696 "name": "Existed_Raid", 00:16:58.696 "aliases": [ 00:16:58.696 "003b1728-4cc0-4e2c-9803-d1a3819691e7" 00:16:58.696 ], 00:16:58.696 "product_name": "Raid Volume", 00:16:58.696 "block_size": 512, 00:16:58.696 "num_blocks": 131072, 00:16:58.696 "uuid": "003b1728-4cc0-4e2c-9803-d1a3819691e7", 00:16:58.696 "assigned_rate_limits": { 00:16:58.696 "rw_ios_per_sec": 0, 00:16:58.696 "rw_mbytes_per_sec": 0, 00:16:58.696 "r_mbytes_per_sec": 0, 00:16:58.696 "w_mbytes_per_sec": 0 00:16:58.696 }, 00:16:58.696 "claimed": false, 00:16:58.696 "zoned": false, 00:16:58.696 "supported_io_types": { 00:16:58.696 "read": true, 00:16:58.696 "write": true, 00:16:58.696 "unmap": false, 00:16:58.696 "flush": false, 00:16:58.696 "reset": true, 00:16:58.696 "nvme_admin": false, 00:16:58.696 "nvme_io": false, 00:16:58.696 "nvme_io_md": false, 00:16:58.696 "write_zeroes": true, 00:16:58.696 "zcopy": false, 00:16:58.696 "get_zone_info": false, 00:16:58.696 "zone_management": false, 00:16:58.696 "zone_append": false, 
00:16:58.696 "compare": false, 00:16:58.696 "compare_and_write": false, 00:16:58.696 "abort": false, 00:16:58.696 "seek_hole": false, 00:16:58.696 "seek_data": false, 00:16:58.696 "copy": false, 00:16:58.696 "nvme_iov_md": false 00:16:58.696 }, 00:16:58.696 "driver_specific": { 00:16:58.696 "raid": { 00:16:58.696 "uuid": "003b1728-4cc0-4e2c-9803-d1a3819691e7", 00:16:58.696 "strip_size_kb": 64, 00:16:58.696 "state": "online", 00:16:58.696 "raid_level": "raid5f", 00:16:58.696 "superblock": false, 00:16:58.696 "num_base_bdevs": 3, 00:16:58.696 "num_base_bdevs_discovered": 3, 00:16:58.696 "num_base_bdevs_operational": 3, 00:16:58.696 "base_bdevs_list": [ 00:16:58.696 { 00:16:58.696 "name": "BaseBdev1", 00:16:58.696 "uuid": "b1ceff03-1de5-408e-ac39-daf092fba146", 00:16:58.696 "is_configured": true, 00:16:58.696 "data_offset": 0, 00:16:58.696 "data_size": 65536 00:16:58.696 }, 00:16:58.696 { 00:16:58.696 "name": "BaseBdev2", 00:16:58.696 "uuid": "b1b2e538-efbf-4932-b015-64a20ea45d39", 00:16:58.696 "is_configured": true, 00:16:58.696 "data_offset": 0, 00:16:58.696 "data_size": 65536 00:16:58.696 }, 00:16:58.696 { 00:16:58.696 "name": "BaseBdev3", 00:16:58.696 "uuid": "0c839af2-6bb4-44da-b779-4eecb2fea7fc", 00:16:58.696 "is_configured": true, 00:16:58.696 "data_offset": 0, 00:16:58.696 "data_size": 65536 00:16:58.696 } 00:16:58.696 ] 00:16:58.696 } 00:16:58.696 } 00:16:58.696 }' 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:16:58.696 BaseBdev2 00:16:58.696 BaseBdev3' 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 
' 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.696 14:16:35 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:58.955 [2024-11-27 14:16:36.121766] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:16:58.955 
14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:16:58.955 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:58.956 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:58.956 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.956 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.215 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.215 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:16:59.215 "name": "Existed_Raid", 00:16:59.215 "uuid": "003b1728-4cc0-4e2c-9803-d1a3819691e7", 00:16:59.215 "strip_size_kb": 64, 00:16:59.215 "state": 
"online", 00:16:59.215 "raid_level": "raid5f", 00:16:59.215 "superblock": false, 00:16:59.215 "num_base_bdevs": 3, 00:16:59.215 "num_base_bdevs_discovered": 2, 00:16:59.215 "num_base_bdevs_operational": 2, 00:16:59.215 "base_bdevs_list": [ 00:16:59.215 { 00:16:59.215 "name": null, 00:16:59.215 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:59.215 "is_configured": false, 00:16:59.215 "data_offset": 0, 00:16:59.215 "data_size": 65536 00:16:59.215 }, 00:16:59.215 { 00:16:59.215 "name": "BaseBdev2", 00:16:59.215 "uuid": "b1b2e538-efbf-4932-b015-64a20ea45d39", 00:16:59.215 "is_configured": true, 00:16:59.215 "data_offset": 0, 00:16:59.215 "data_size": 65536 00:16:59.215 }, 00:16:59.215 { 00:16:59.215 "name": "BaseBdev3", 00:16:59.215 "uuid": "0c839af2-6bb4-44da-b779-4eecb2fea7fc", 00:16:59.215 "is_configured": true, 00:16:59.215 "data_offset": 0, 00:16:59.215 "data_size": 65536 00:16:59.215 } 00:16:59.215 ] 00:16:59.215 }' 00:16:59.215 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:16:59.215 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.474 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:16:59.474 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.474 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.474 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.474 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:59.474 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.474 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.830 [2024-11-27 14:16:36.786348] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:59.830 [2024-11-27 14:16:36.786464] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:59.830 [2024-11-27 14:16:36.865778] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 
00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.830 14:16:36 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.830 [2024-11-27 14:16:36.930085] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:59.830 [2024-11-27 14:16:36.930194] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < 
num_base_bdevs )) 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.830 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.090 BaseBdev2 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.090 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:17:00.090 [ 00:17:00.090 { 00:17:00.090 "name": "BaseBdev2", 00:17:00.090 "aliases": [ 00:17:00.090 "ebb05827-1fc2-4461-8b19-1aaff6123ca8" 00:17:00.090 ], 00:17:00.091 "product_name": "Malloc disk", 00:17:00.091 "block_size": 512, 00:17:00.091 "num_blocks": 65536, 00:17:00.091 "uuid": "ebb05827-1fc2-4461-8b19-1aaff6123ca8", 00:17:00.091 "assigned_rate_limits": { 00:17:00.091 "rw_ios_per_sec": 0, 00:17:00.091 "rw_mbytes_per_sec": 0, 00:17:00.091 "r_mbytes_per_sec": 0, 00:17:00.091 "w_mbytes_per_sec": 0 00:17:00.091 }, 00:17:00.091 "claimed": false, 00:17:00.091 "zoned": false, 00:17:00.091 "supported_io_types": { 00:17:00.091 "read": true, 00:17:00.091 "write": true, 00:17:00.091 "unmap": true, 00:17:00.091 "flush": true, 00:17:00.091 "reset": true, 00:17:00.091 "nvme_admin": false, 00:17:00.091 "nvme_io": false, 00:17:00.091 "nvme_io_md": false, 00:17:00.091 "write_zeroes": true, 00:17:00.091 "zcopy": true, 00:17:00.091 "get_zone_info": false, 00:17:00.091 "zone_management": false, 00:17:00.091 "zone_append": false, 00:17:00.091 "compare": false, 00:17:00.091 "compare_and_write": false, 00:17:00.091 "abort": true, 00:17:00.091 "seek_hole": false, 00:17:00.091 "seek_data": false, 00:17:00.091 "copy": true, 00:17:00.091 "nvme_iov_md": false 00:17:00.091 }, 00:17:00.091 "memory_domains": [ 00:17:00.091 { 00:17:00.091 "dma_device_id": "system", 00:17:00.091 "dma_device_type": 1 00:17:00.091 }, 00:17:00.091 { 00:17:00.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.091 "dma_device_type": 2 00:17:00.091 } 00:17:00.091 ], 00:17:00.091 "driver_specific": {} 00:17:00.091 } 00:17:00.091 ] 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.091 BaseBdev3 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:17:00.091 [ 00:17:00.091 { 00:17:00.091 "name": "BaseBdev3", 00:17:00.091 "aliases": [ 00:17:00.091 "46d750de-a7b6-485b-90b3-0134972f9403" 00:17:00.091 ], 00:17:00.091 "product_name": "Malloc disk", 00:17:00.091 "block_size": 512, 00:17:00.091 "num_blocks": 65536, 00:17:00.091 "uuid": "46d750de-a7b6-485b-90b3-0134972f9403", 00:17:00.091 "assigned_rate_limits": { 00:17:00.091 "rw_ios_per_sec": 0, 00:17:00.091 "rw_mbytes_per_sec": 0, 00:17:00.091 "r_mbytes_per_sec": 0, 00:17:00.091 "w_mbytes_per_sec": 0 00:17:00.091 }, 00:17:00.091 "claimed": false, 00:17:00.091 "zoned": false, 00:17:00.091 "supported_io_types": { 00:17:00.091 "read": true, 00:17:00.091 "write": true, 00:17:00.091 "unmap": true, 00:17:00.091 "flush": true, 00:17:00.091 "reset": true, 00:17:00.091 "nvme_admin": false, 00:17:00.091 "nvme_io": false, 00:17:00.091 "nvme_io_md": false, 00:17:00.091 "write_zeroes": true, 00:17:00.091 "zcopy": true, 00:17:00.091 "get_zone_info": false, 00:17:00.091 "zone_management": false, 00:17:00.091 "zone_append": false, 00:17:00.091 "compare": false, 00:17:00.091 "compare_and_write": false, 00:17:00.091 "abort": true, 00:17:00.091 "seek_hole": false, 00:17:00.091 "seek_data": false, 00:17:00.091 "copy": true, 00:17:00.091 "nvme_iov_md": false 00:17:00.091 }, 00:17:00.091 "memory_domains": [ 00:17:00.091 { 00:17:00.091 "dma_device_id": "system", 00:17:00.091 "dma_device_type": 1 00:17:00.091 }, 00:17:00.091 { 00:17:00.091 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.091 "dma_device_type": 2 00:17:00.091 } 00:17:00.091 ], 00:17:00.091 "driver_specific": {} 00:17:00.091 } 00:17:00.091 ] 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:00.091 14:16:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.091 [2024-11-27 14:16:37.228014] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:00.091 [2024-11-27 14:16:37.228067] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:00.091 [2024-11-27 14:16:37.228099] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:00.091 [2024-11-27 14:16:37.230533] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.091 14:16:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.091 "name": "Existed_Raid", 00:17:00.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.091 "strip_size_kb": 64, 00:17:00.091 "state": "configuring", 00:17:00.091 "raid_level": "raid5f", 00:17:00.091 "superblock": false, 00:17:00.091 "num_base_bdevs": 3, 00:17:00.091 "num_base_bdevs_discovered": 2, 00:17:00.091 "num_base_bdevs_operational": 3, 00:17:00.091 "base_bdevs_list": [ 00:17:00.091 { 00:17:00.091 "name": "BaseBdev1", 00:17:00.091 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.091 "is_configured": false, 00:17:00.091 "data_offset": 0, 00:17:00.091 "data_size": 0 00:17:00.091 }, 00:17:00.091 { 00:17:00.091 "name": "BaseBdev2", 00:17:00.091 "uuid": "ebb05827-1fc2-4461-8b19-1aaff6123ca8", 00:17:00.091 "is_configured": true, 00:17:00.091 "data_offset": 0, 00:17:00.091 "data_size": 65536 00:17:00.091 }, 00:17:00.091 { 00:17:00.091 "name": "BaseBdev3", 00:17:00.091 "uuid": "46d750de-a7b6-485b-90b3-0134972f9403", 00:17:00.091 "is_configured": true, 
00:17:00.091 "data_offset": 0, 00:17:00.091 "data_size": 65536 00:17:00.091 } 00:17:00.091 ] 00:17:00.091 }' 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.091 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.660 [2024-11-27 14:16:37.764234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:00.660 14:16:37 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.660 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:00.660 "name": "Existed_Raid", 00:17:00.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.660 "strip_size_kb": 64, 00:17:00.660 "state": "configuring", 00:17:00.660 "raid_level": "raid5f", 00:17:00.660 "superblock": false, 00:17:00.660 "num_base_bdevs": 3, 00:17:00.660 "num_base_bdevs_discovered": 1, 00:17:00.660 "num_base_bdevs_operational": 3, 00:17:00.660 "base_bdevs_list": [ 00:17:00.660 { 00:17:00.660 "name": "BaseBdev1", 00:17:00.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:00.660 "is_configured": false, 00:17:00.660 "data_offset": 0, 00:17:00.660 "data_size": 0 00:17:00.660 }, 00:17:00.660 { 00:17:00.660 "name": null, 00:17:00.660 "uuid": "ebb05827-1fc2-4461-8b19-1aaff6123ca8", 00:17:00.660 "is_configured": false, 00:17:00.660 "data_offset": 0, 00:17:00.660 "data_size": 65536 00:17:00.660 }, 00:17:00.660 { 00:17:00.660 "name": "BaseBdev3", 00:17:00.660 "uuid": "46d750de-a7b6-485b-90b3-0134972f9403", 00:17:00.660 "is_configured": true, 00:17:00.660 "data_offset": 0, 00:17:00.660 "data_size": 65536 00:17:00.660 } 00:17:00.660 ] 00:17:00.660 }' 00:17:00.661 14:16:37 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:00.661 14:16:37 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.228 [2024-11-27 14:16:38.412331] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:01.228 BaseBdev1 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:01.228 14:16:38 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.228 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.228 [ 00:17:01.228 { 00:17:01.228 "name": "BaseBdev1", 00:17:01.228 "aliases": [ 00:17:01.228 "917f78a9-c1b6-400c-89f3-83fa502ded3c" 00:17:01.228 ], 00:17:01.228 "product_name": "Malloc disk", 00:17:01.228 "block_size": 512, 00:17:01.228 "num_blocks": 65536, 00:17:01.228 "uuid": "917f78a9-c1b6-400c-89f3-83fa502ded3c", 00:17:01.228 "assigned_rate_limits": { 00:17:01.228 "rw_ios_per_sec": 0, 00:17:01.228 "rw_mbytes_per_sec": 0, 00:17:01.228 "r_mbytes_per_sec": 0, 00:17:01.228 "w_mbytes_per_sec": 0 00:17:01.228 }, 00:17:01.228 "claimed": true, 00:17:01.228 "claim_type": "exclusive_write", 00:17:01.228 "zoned": false, 00:17:01.228 "supported_io_types": { 00:17:01.228 "read": true, 00:17:01.229 "write": true, 00:17:01.229 "unmap": true, 00:17:01.229 "flush": true, 00:17:01.229 "reset": true, 00:17:01.229 "nvme_admin": false, 00:17:01.229 "nvme_io": false, 00:17:01.229 "nvme_io_md": false, 00:17:01.229 "write_zeroes": true, 00:17:01.229 "zcopy": true, 00:17:01.229 "get_zone_info": false, 00:17:01.229 "zone_management": false, 00:17:01.229 "zone_append": false, 00:17:01.229 
"compare": false, 00:17:01.229 "compare_and_write": false, 00:17:01.229 "abort": true, 00:17:01.229 "seek_hole": false, 00:17:01.229 "seek_data": false, 00:17:01.229 "copy": true, 00:17:01.229 "nvme_iov_md": false 00:17:01.229 }, 00:17:01.229 "memory_domains": [ 00:17:01.229 { 00:17:01.229 "dma_device_id": "system", 00:17:01.229 "dma_device_type": 1 00:17:01.229 }, 00:17:01.229 { 00:17:01.229 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:01.229 "dma_device_type": 2 00:17:01.229 } 00:17:01.229 ], 00:17:01.229 "driver_specific": {} 00:17:01.229 } 00:17:01.229 ] 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.229 14:16:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:01.229 "name": "Existed_Raid", 00:17:01.229 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:01.229 "strip_size_kb": 64, 00:17:01.229 "state": "configuring", 00:17:01.229 "raid_level": "raid5f", 00:17:01.229 "superblock": false, 00:17:01.229 "num_base_bdevs": 3, 00:17:01.229 "num_base_bdevs_discovered": 2, 00:17:01.229 "num_base_bdevs_operational": 3, 00:17:01.229 "base_bdevs_list": [ 00:17:01.229 { 00:17:01.229 "name": "BaseBdev1", 00:17:01.229 "uuid": "917f78a9-c1b6-400c-89f3-83fa502ded3c", 00:17:01.229 "is_configured": true, 00:17:01.229 "data_offset": 0, 00:17:01.229 "data_size": 65536 00:17:01.229 }, 00:17:01.229 { 00:17:01.229 "name": null, 00:17:01.229 "uuid": "ebb05827-1fc2-4461-8b19-1aaff6123ca8", 00:17:01.229 "is_configured": false, 00:17:01.229 "data_offset": 0, 00:17:01.229 "data_size": 65536 00:17:01.229 }, 00:17:01.229 { 00:17:01.229 "name": "BaseBdev3", 00:17:01.229 "uuid": "46d750de-a7b6-485b-90b3-0134972f9403", 00:17:01.229 "is_configured": true, 00:17:01.229 "data_offset": 0, 00:17:01.229 "data_size": 65536 00:17:01.229 } 00:17:01.229 ] 00:17:01.229 }' 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:01.229 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.796 14:16:38 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.796 14:16:38 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:01.796 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.796 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.796 14:16:38 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.796 [2024-11-27 14:16:39.020504] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:01.796 14:16:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:01.796 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.057 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.057 "name": "Existed_Raid", 00:17:02.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.057 "strip_size_kb": 64, 00:17:02.057 "state": "configuring", 00:17:02.057 "raid_level": "raid5f", 00:17:02.057 "superblock": false, 00:17:02.057 "num_base_bdevs": 3, 00:17:02.057 "num_base_bdevs_discovered": 1, 00:17:02.057 "num_base_bdevs_operational": 3, 00:17:02.057 "base_bdevs_list": [ 00:17:02.057 { 00:17:02.057 "name": "BaseBdev1", 00:17:02.057 "uuid": "917f78a9-c1b6-400c-89f3-83fa502ded3c", 00:17:02.057 "is_configured": true, 00:17:02.057 "data_offset": 0, 00:17:02.057 "data_size": 65536 00:17:02.057 }, 00:17:02.057 { 00:17:02.057 "name": null, 00:17:02.057 "uuid": "ebb05827-1fc2-4461-8b19-1aaff6123ca8", 00:17:02.057 "is_configured": false, 00:17:02.057 "data_offset": 0, 00:17:02.057 "data_size": 65536 00:17:02.057 }, 00:17:02.057 { 00:17:02.057 "name": null, 
00:17:02.057 "uuid": "46d750de-a7b6-485b-90b3-0134972f9403", 00:17:02.057 "is_configured": false, 00:17:02.057 "data_offset": 0, 00:17:02.057 "data_size": 65536 00:17:02.057 } 00:17:02.057 ] 00:17:02.057 }' 00:17:02.057 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.057 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.318 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:02.318 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.318 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.318 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.318 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.577 [2024-11-27 14:16:39.612728] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:02.577 14:16:39 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.577 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:02.577 "name": "Existed_Raid", 00:17:02.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:02.578 "strip_size_kb": 64, 00:17:02.578 "state": "configuring", 00:17:02.578 "raid_level": "raid5f", 00:17:02.578 "superblock": false, 00:17:02.578 "num_base_bdevs": 3, 00:17:02.578 "num_base_bdevs_discovered": 2, 00:17:02.578 "num_base_bdevs_operational": 3, 00:17:02.578 "base_bdevs_list": [ 00:17:02.578 { 
00:17:02.578 "name": "BaseBdev1", 00:17:02.578 "uuid": "917f78a9-c1b6-400c-89f3-83fa502ded3c", 00:17:02.578 "is_configured": true, 00:17:02.578 "data_offset": 0, 00:17:02.578 "data_size": 65536 00:17:02.578 }, 00:17:02.578 { 00:17:02.578 "name": null, 00:17:02.578 "uuid": "ebb05827-1fc2-4461-8b19-1aaff6123ca8", 00:17:02.578 "is_configured": false, 00:17:02.578 "data_offset": 0, 00:17:02.578 "data_size": 65536 00:17:02.578 }, 00:17:02.578 { 00:17:02.578 "name": "BaseBdev3", 00:17:02.578 "uuid": "46d750de-a7b6-485b-90b3-0134972f9403", 00:17:02.578 "is_configured": true, 00:17:02.578 "data_offset": 0, 00:17:02.578 "data_size": 65536 00:17:02.578 } 00:17:02.578 ] 00:17:02.578 }' 00:17:02.578 14:16:39 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:02.578 14:16:39 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.146 [2024-11-27 14:16:40.212977] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.146 "name": "Existed_Raid", 00:17:03.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.146 "strip_size_kb": 64, 00:17:03.146 "state": "configuring", 00:17:03.146 "raid_level": "raid5f", 00:17:03.146 "superblock": false, 00:17:03.146 "num_base_bdevs": 3, 00:17:03.146 "num_base_bdevs_discovered": 1, 00:17:03.146 "num_base_bdevs_operational": 3, 00:17:03.146 "base_bdevs_list": [ 00:17:03.146 { 00:17:03.146 "name": null, 00:17:03.146 "uuid": "917f78a9-c1b6-400c-89f3-83fa502ded3c", 00:17:03.146 "is_configured": false, 00:17:03.146 "data_offset": 0, 00:17:03.146 "data_size": 65536 00:17:03.146 }, 00:17:03.146 { 00:17:03.146 "name": null, 00:17:03.146 "uuid": "ebb05827-1fc2-4461-8b19-1aaff6123ca8", 00:17:03.146 "is_configured": false, 00:17:03.146 "data_offset": 0, 00:17:03.146 "data_size": 65536 00:17:03.146 }, 00:17:03.146 { 00:17:03.146 "name": "BaseBdev3", 00:17:03.146 "uuid": "46d750de-a7b6-485b-90b3-0134972f9403", 00:17:03.146 "is_configured": true, 00:17:03.146 "data_offset": 0, 00:17:03.146 "data_size": 65536 00:17:03.146 } 00:17:03.146 ] 00:17:03.146 }' 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.146 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.715 [2024-11-27 14:16:40.844263] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:03.715 14:16:40 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:03.715 "name": "Existed_Raid", 00:17:03.715 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:03.715 "strip_size_kb": 64, 00:17:03.715 "state": "configuring", 00:17:03.715 "raid_level": "raid5f", 00:17:03.715 "superblock": false, 00:17:03.715 "num_base_bdevs": 3, 00:17:03.715 "num_base_bdevs_discovered": 2, 00:17:03.715 "num_base_bdevs_operational": 3, 00:17:03.715 "base_bdevs_list": [ 00:17:03.715 { 00:17:03.715 "name": null, 00:17:03.715 "uuid": "917f78a9-c1b6-400c-89f3-83fa502ded3c", 00:17:03.715 "is_configured": false, 00:17:03.715 "data_offset": 0, 00:17:03.715 "data_size": 65536 00:17:03.715 }, 00:17:03.715 { 00:17:03.715 "name": "BaseBdev2", 00:17:03.715 "uuid": "ebb05827-1fc2-4461-8b19-1aaff6123ca8", 00:17:03.715 "is_configured": true, 00:17:03.715 "data_offset": 0, 00:17:03.715 "data_size": 65536 00:17:03.715 }, 00:17:03.715 { 00:17:03.715 "name": "BaseBdev3", 00:17:03.715 "uuid": "46d750de-a7b6-485b-90b3-0134972f9403", 00:17:03.715 "is_configured": true, 00:17:03.715 "data_offset": 0, 00:17:03.715 "data_size": 65536 00:17:03.715 } 00:17:03.715 ] 00:17:03.715 }' 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:03.715 14:16:40 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.285 14:16:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 917f78a9-c1b6-400c-89f3-83fa502ded3c 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.285 [2024-11-27 14:16:41.509163] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:04.285 [2024-11-27 14:16:41.509264] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:04.285 [2024-11-27 14:16:41.509280] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:04.285 [2024-11-27 14:16:41.509586] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: 
raid_bdev_create_cb, 0x60d000006220 00:17:04.285 [2024-11-27 14:16:41.514019] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:04.285 [2024-11-27 14:16:41.514044] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:04.285 [2024-11-27 14:16:41.514390] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:04.285 NewBaseBdev 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:04.285 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.285 14:16:41 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.285 [ 00:17:04.285 { 00:17:04.285 "name": "NewBaseBdev", 00:17:04.285 "aliases": [ 00:17:04.285 "917f78a9-c1b6-400c-89f3-83fa502ded3c" 00:17:04.285 ], 00:17:04.285 "product_name": "Malloc disk", 00:17:04.285 "block_size": 512, 00:17:04.285 "num_blocks": 65536, 00:17:04.285 "uuid": "917f78a9-c1b6-400c-89f3-83fa502ded3c", 00:17:04.285 "assigned_rate_limits": { 00:17:04.285 "rw_ios_per_sec": 0, 00:17:04.285 "rw_mbytes_per_sec": 0, 00:17:04.285 "r_mbytes_per_sec": 0, 00:17:04.285 "w_mbytes_per_sec": 0 00:17:04.285 }, 00:17:04.285 "claimed": true, 00:17:04.285 "claim_type": "exclusive_write", 00:17:04.285 "zoned": false, 00:17:04.285 "supported_io_types": { 00:17:04.285 "read": true, 00:17:04.285 "write": true, 00:17:04.285 "unmap": true, 00:17:04.285 "flush": true, 00:17:04.285 "reset": true, 00:17:04.285 "nvme_admin": false, 00:17:04.285 "nvme_io": false, 00:17:04.285 "nvme_io_md": false, 00:17:04.285 "write_zeroes": true, 00:17:04.285 "zcopy": true, 00:17:04.285 "get_zone_info": false, 00:17:04.285 "zone_management": false, 00:17:04.285 "zone_append": false, 00:17:04.285 "compare": false, 00:17:04.285 "compare_and_write": false, 00:17:04.285 "abort": true, 00:17:04.285 "seek_hole": false, 00:17:04.285 "seek_data": false, 00:17:04.285 "copy": true, 00:17:04.285 "nvme_iov_md": false 00:17:04.285 }, 00:17:04.285 "memory_domains": [ 00:17:04.285 { 00:17:04.285 "dma_device_id": "system", 00:17:04.285 "dma_device_type": 1 00:17:04.285 }, 00:17:04.285 { 00:17:04.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.285 "dma_device_type": 2 00:17:04.545 } 00:17:04.545 ], 00:17:04.545 "driver_specific": {} 00:17:04.545 } 00:17:04.545 ] 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:17:04.545 14:16:41 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:04.545 "name": "Existed_Raid", 00:17:04.545 "uuid": "1678e97b-eaab-4087-b1a0-9ddf725bc425", 00:17:04.545 "strip_size_kb": 64, 00:17:04.545 "state": "online", 
00:17:04.545 "raid_level": "raid5f", 00:17:04.545 "superblock": false, 00:17:04.545 "num_base_bdevs": 3, 00:17:04.545 "num_base_bdevs_discovered": 3, 00:17:04.545 "num_base_bdevs_operational": 3, 00:17:04.545 "base_bdevs_list": [ 00:17:04.545 { 00:17:04.545 "name": "NewBaseBdev", 00:17:04.545 "uuid": "917f78a9-c1b6-400c-89f3-83fa502ded3c", 00:17:04.545 "is_configured": true, 00:17:04.545 "data_offset": 0, 00:17:04.545 "data_size": 65536 00:17:04.545 }, 00:17:04.545 { 00:17:04.545 "name": "BaseBdev2", 00:17:04.545 "uuid": "ebb05827-1fc2-4461-8b19-1aaff6123ca8", 00:17:04.545 "is_configured": true, 00:17:04.545 "data_offset": 0, 00:17:04.545 "data_size": 65536 00:17:04.545 }, 00:17:04.545 { 00:17:04.545 "name": "BaseBdev3", 00:17:04.545 "uuid": "46d750de-a7b6-485b-90b3-0134972f9403", 00:17:04.545 "is_configured": true, 00:17:04.545 "data_offset": 0, 00:17:04.545 "data_size": 65536 00:17:04.545 } 00:17:04.545 ] 00:17:04.545 }' 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:04.545 14:16:41 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.805 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:04.805 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:04.805 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:04.805 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:04.805 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:04.805 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:04.805 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:04.805 14:16:42 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.805 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:04.805 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:04.805 [2024-11-27 14:16:42.060546] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.805 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:05.064 "name": "Existed_Raid", 00:17:05.064 "aliases": [ 00:17:05.064 "1678e97b-eaab-4087-b1a0-9ddf725bc425" 00:17:05.064 ], 00:17:05.064 "product_name": "Raid Volume", 00:17:05.064 "block_size": 512, 00:17:05.064 "num_blocks": 131072, 00:17:05.064 "uuid": "1678e97b-eaab-4087-b1a0-9ddf725bc425", 00:17:05.064 "assigned_rate_limits": { 00:17:05.064 "rw_ios_per_sec": 0, 00:17:05.064 "rw_mbytes_per_sec": 0, 00:17:05.064 "r_mbytes_per_sec": 0, 00:17:05.064 "w_mbytes_per_sec": 0 00:17:05.064 }, 00:17:05.064 "claimed": false, 00:17:05.064 "zoned": false, 00:17:05.064 "supported_io_types": { 00:17:05.064 "read": true, 00:17:05.064 "write": true, 00:17:05.064 "unmap": false, 00:17:05.064 "flush": false, 00:17:05.064 "reset": true, 00:17:05.064 "nvme_admin": false, 00:17:05.064 "nvme_io": false, 00:17:05.064 "nvme_io_md": false, 00:17:05.064 "write_zeroes": true, 00:17:05.064 "zcopy": false, 00:17:05.064 "get_zone_info": false, 00:17:05.064 "zone_management": false, 00:17:05.064 "zone_append": false, 00:17:05.064 "compare": false, 00:17:05.064 "compare_and_write": false, 00:17:05.064 "abort": false, 00:17:05.064 "seek_hole": false, 00:17:05.064 "seek_data": false, 00:17:05.064 "copy": false, 00:17:05.064 "nvme_iov_md": false 00:17:05.064 }, 00:17:05.064 "driver_specific": { 00:17:05.064 "raid": { 00:17:05.064 "uuid": 
"1678e97b-eaab-4087-b1a0-9ddf725bc425", 00:17:05.064 "strip_size_kb": 64, 00:17:05.064 "state": "online", 00:17:05.064 "raid_level": "raid5f", 00:17:05.064 "superblock": false, 00:17:05.064 "num_base_bdevs": 3, 00:17:05.064 "num_base_bdevs_discovered": 3, 00:17:05.064 "num_base_bdevs_operational": 3, 00:17:05.064 "base_bdevs_list": [ 00:17:05.064 { 00:17:05.064 "name": "NewBaseBdev", 00:17:05.064 "uuid": "917f78a9-c1b6-400c-89f3-83fa502ded3c", 00:17:05.064 "is_configured": true, 00:17:05.064 "data_offset": 0, 00:17:05.064 "data_size": 65536 00:17:05.064 }, 00:17:05.064 { 00:17:05.064 "name": "BaseBdev2", 00:17:05.064 "uuid": "ebb05827-1fc2-4461-8b19-1aaff6123ca8", 00:17:05.064 "is_configured": true, 00:17:05.064 "data_offset": 0, 00:17:05.064 "data_size": 65536 00:17:05.064 }, 00:17:05.064 { 00:17:05.064 "name": "BaseBdev3", 00:17:05.064 "uuid": "46d750de-a7b6-485b-90b3-0134972f9403", 00:17:05.064 "is_configured": true, 00:17:05.064 "data_offset": 0, 00:17:05.064 "data_size": 65536 00:17:05.064 } 00:17:05.064 ] 00:17:05.064 } 00:17:05.064 } 00:17:05.064 }' 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:05.064 BaseBdev2 00:17:05.064 BaseBdev3' 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.064 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.065 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.065 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.065 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:05.065 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.065 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.065 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.065 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.065 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.065 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.065 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:05.324 [2024-11-27 14:16:42.400414] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:05.324 [2024-11-27 14:16:42.400448] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:05.324 [2024-11-27 14:16:42.400546] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:05.324 [2024-11-27 14:16:42.400927] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:05.324 [2024-11-27 14:16:42.400960] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 80133 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 80133 ']' 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@958 -- # kill -0 80133 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80133 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:05.324 killing process with pid 80133 00:17:05.324 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80133' 00:17:05.325 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 80133 00:17:05.325 [2024-11-27 14:16:42.440451] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:05.325 14:16:42 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 80133 00:17:05.584 [2024-11-27 14:16:42.687262] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:17:06.521 00:17:06.521 real 0m11.944s 00:17:06.521 user 0m19.922s 00:17:06.521 sys 0m1.654s 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:06.521 ************************************ 00:17:06.521 END TEST raid5f_state_function_test 00:17:06.521 ************************************ 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:17:06.521 14:16:43 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 3 true 00:17:06.521 14:16:43 bdev_raid -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:06.521 14:16:43 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:06.521 14:16:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:06.521 ************************************ 00:17:06.521 START TEST raid5f_state_function_test_sb 00:17:06.521 ************************************ 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 3 true 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=3 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:17:06.521 14:16:43 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=80766 00:17:06.521 Process raid pid: 80766 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 80766' 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@231 -- # waitforlisten 80766 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 80766 ']' 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:06.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:06.521 14:16:43 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:06.780 [2024-11-27 14:16:43.876251] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:17:06.780 [2024-11-27 14:16:43.876436] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:07.039 [2024-11-27 14:16:44.057350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:07.040 [2024-11-27 14:16:44.178824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.298 [2024-11-27 14:16:44.375948] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.298 [2024-11-27 14:16:44.375988] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.868 [2024-11-27 14:16:44.916619] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:07.868 [2024-11-27 14:16:44.916678] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:07.868 [2024-11-27 14:16:44.916695] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:07.868 [2024-11-27 14:16:44.916709] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:07.868 [2024-11-27 14:16:44.916718] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to 
find bdev with name: BaseBdev3 00:17:07.868 [2024-11-27 14:16:44.916730] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:07.868 14:16:44 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:07.868 "name": "Existed_Raid", 00:17:07.868 "uuid": "bb78a93c-194e-4679-9841-a7d18f32099a", 00:17:07.868 "strip_size_kb": 64, 00:17:07.868 "state": "configuring", 00:17:07.868 "raid_level": "raid5f", 00:17:07.868 "superblock": true, 00:17:07.868 "num_base_bdevs": 3, 00:17:07.868 "num_base_bdevs_discovered": 0, 00:17:07.868 "num_base_bdevs_operational": 3, 00:17:07.868 "base_bdevs_list": [ 00:17:07.868 { 00:17:07.868 "name": "BaseBdev1", 00:17:07.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.868 "is_configured": false, 00:17:07.868 "data_offset": 0, 00:17:07.868 "data_size": 0 00:17:07.868 }, 00:17:07.868 { 00:17:07.868 "name": "BaseBdev2", 00:17:07.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.868 "is_configured": false, 00:17:07.868 "data_offset": 0, 00:17:07.868 "data_size": 0 00:17:07.868 }, 00:17:07.868 { 00:17:07.868 "name": "BaseBdev3", 00:17:07.868 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.868 "is_configured": false, 00:17:07.868 "data_offset": 0, 00:17:07.868 "data_size": 0 00:17:07.868 } 00:17:07.868 ] 00:17:07.868 }' 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:07.868 14:16:44 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.435 [2024-11-27 14:16:45.452708] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:08.435 
[2024-11-27 14:16:45.452748] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.435 [2024-11-27 14:16:45.460703] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:08.435 [2024-11-27 14:16:45.460770] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:08.435 [2024-11-27 14:16:45.460812] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:08.435 [2024-11-27 14:16:45.460831] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:08.435 [2024-11-27 14:16:45.460840] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:08.435 [2024-11-27 14:16:45.460854] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.435 [2024-11-27 14:16:45.504762] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:08.435 BaseBdev1 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.435 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.435 [ 00:17:08.435 { 00:17:08.435 "name": "BaseBdev1", 00:17:08.435 "aliases": [ 00:17:08.435 "5804730e-b067-40fb-84b7-96890c7f0b27" 00:17:08.435 ], 00:17:08.435 "product_name": "Malloc disk", 00:17:08.435 "block_size": 512, 00:17:08.435 
"num_blocks": 65536, 00:17:08.435 "uuid": "5804730e-b067-40fb-84b7-96890c7f0b27", 00:17:08.435 "assigned_rate_limits": { 00:17:08.435 "rw_ios_per_sec": 0, 00:17:08.435 "rw_mbytes_per_sec": 0, 00:17:08.435 "r_mbytes_per_sec": 0, 00:17:08.435 "w_mbytes_per_sec": 0 00:17:08.435 }, 00:17:08.435 "claimed": true, 00:17:08.435 "claim_type": "exclusive_write", 00:17:08.435 "zoned": false, 00:17:08.436 "supported_io_types": { 00:17:08.436 "read": true, 00:17:08.436 "write": true, 00:17:08.436 "unmap": true, 00:17:08.436 "flush": true, 00:17:08.436 "reset": true, 00:17:08.436 "nvme_admin": false, 00:17:08.436 "nvme_io": false, 00:17:08.436 "nvme_io_md": false, 00:17:08.436 "write_zeroes": true, 00:17:08.436 "zcopy": true, 00:17:08.436 "get_zone_info": false, 00:17:08.436 "zone_management": false, 00:17:08.436 "zone_append": false, 00:17:08.436 "compare": false, 00:17:08.436 "compare_and_write": false, 00:17:08.436 "abort": true, 00:17:08.436 "seek_hole": false, 00:17:08.436 "seek_data": false, 00:17:08.436 "copy": true, 00:17:08.436 "nvme_iov_md": false 00:17:08.436 }, 00:17:08.436 "memory_domains": [ 00:17:08.436 { 00:17:08.436 "dma_device_id": "system", 00:17:08.436 "dma_device_type": 1 00:17:08.436 }, 00:17:08.436 { 00:17:08.436 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:08.436 "dma_device_type": 2 00:17:08.436 } 00:17:08.436 ], 00:17:08.436 "driver_specific": {} 00:17:08.436 } 00:17:08.436 ] 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:08.436 "name": "Existed_Raid", 00:17:08.436 "uuid": "ca9a8d6d-b099-4ad8-8a94-a6159ead3696", 00:17:08.436 "strip_size_kb": 64, 00:17:08.436 "state": "configuring", 00:17:08.436 "raid_level": "raid5f", 00:17:08.436 "superblock": true, 00:17:08.436 "num_base_bdevs": 3, 00:17:08.436 "num_base_bdevs_discovered": 1, 00:17:08.436 "num_base_bdevs_operational": 3, 00:17:08.436 "base_bdevs_list": [ 00:17:08.436 { 00:17:08.436 
"name": "BaseBdev1", 00:17:08.436 "uuid": "5804730e-b067-40fb-84b7-96890c7f0b27", 00:17:08.436 "is_configured": true, 00:17:08.436 "data_offset": 2048, 00:17:08.436 "data_size": 63488 00:17:08.436 }, 00:17:08.436 { 00:17:08.436 "name": "BaseBdev2", 00:17:08.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.436 "is_configured": false, 00:17:08.436 "data_offset": 0, 00:17:08.436 "data_size": 0 00:17:08.436 }, 00:17:08.436 { 00:17:08.436 "name": "BaseBdev3", 00:17:08.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:08.436 "is_configured": false, 00:17:08.436 "data_offset": 0, 00:17:08.436 "data_size": 0 00:17:08.436 } 00:17:08.436 ] 00:17:08.436 }' 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:08.436 14:16:45 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.003 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:09.003 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.003 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.004 [2024-11-27 14:16:46.048999] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:09.004 [2024-11-27 14:16:46.049058] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:17:09.004 [2024-11-27 14:16:46.057044] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:09.004 [2024-11-27 14:16:46.059435] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:09.004 [2024-11-27 14:16:46.059720] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:09.004 [2024-11-27 14:16:46.059747] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:17:09.004 [2024-11-27 14:16:46.059765] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.004 "name": "Existed_Raid", 00:17:09.004 "uuid": "3b825451-26b6-4f13-b437-6c9676bf42ce", 00:17:09.004 "strip_size_kb": 64, 00:17:09.004 "state": "configuring", 00:17:09.004 "raid_level": "raid5f", 00:17:09.004 "superblock": true, 00:17:09.004 "num_base_bdevs": 3, 00:17:09.004 "num_base_bdevs_discovered": 1, 00:17:09.004 "num_base_bdevs_operational": 3, 00:17:09.004 "base_bdevs_list": [ 00:17:09.004 { 00:17:09.004 "name": "BaseBdev1", 00:17:09.004 "uuid": "5804730e-b067-40fb-84b7-96890c7f0b27", 00:17:09.004 "is_configured": true, 00:17:09.004 "data_offset": 2048, 00:17:09.004 "data_size": 63488 00:17:09.004 }, 00:17:09.004 { 00:17:09.004 "name": "BaseBdev2", 00:17:09.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.004 "is_configured": false, 00:17:09.004 "data_offset": 0, 00:17:09.004 "data_size": 0 00:17:09.004 }, 00:17:09.004 { 00:17:09.004 "name": "BaseBdev3", 00:17:09.004 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.004 "is_configured": false, 00:17:09.004 "data_offset": 0, 00:17:09.004 "data_size": 
0 00:17:09.004 } 00:17:09.004 ] 00:17:09.004 }' 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.004 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.571 [2024-11-27 14:16:46.587948] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:09.571 BaseBdev2 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.571 [ 00:17:09.571 { 00:17:09.571 "name": "BaseBdev2", 00:17:09.571 "aliases": [ 00:17:09.571 "7432a82d-3b14-4337-b178-6385c3792be2" 00:17:09.571 ], 00:17:09.571 "product_name": "Malloc disk", 00:17:09.571 "block_size": 512, 00:17:09.571 "num_blocks": 65536, 00:17:09.571 "uuid": "7432a82d-3b14-4337-b178-6385c3792be2", 00:17:09.571 "assigned_rate_limits": { 00:17:09.571 "rw_ios_per_sec": 0, 00:17:09.571 "rw_mbytes_per_sec": 0, 00:17:09.571 "r_mbytes_per_sec": 0, 00:17:09.571 "w_mbytes_per_sec": 0 00:17:09.571 }, 00:17:09.571 "claimed": true, 00:17:09.571 "claim_type": "exclusive_write", 00:17:09.571 "zoned": false, 00:17:09.571 "supported_io_types": { 00:17:09.571 "read": true, 00:17:09.571 "write": true, 00:17:09.571 "unmap": true, 00:17:09.571 "flush": true, 00:17:09.571 "reset": true, 00:17:09.571 "nvme_admin": false, 00:17:09.571 "nvme_io": false, 00:17:09.571 "nvme_io_md": false, 00:17:09.571 "write_zeroes": true, 00:17:09.571 "zcopy": true, 00:17:09.571 "get_zone_info": false, 00:17:09.571 "zone_management": false, 00:17:09.571 "zone_append": false, 00:17:09.571 "compare": false, 00:17:09.571 "compare_and_write": false, 00:17:09.571 "abort": true, 00:17:09.571 "seek_hole": false, 00:17:09.571 "seek_data": false, 00:17:09.571 "copy": true, 00:17:09.571 "nvme_iov_md": false 00:17:09.571 }, 00:17:09.571 "memory_domains": [ 00:17:09.571 { 00:17:09.571 "dma_device_id": "system", 00:17:09.571 "dma_device_type": 1 00:17:09.571 }, 00:17:09.571 { 00:17:09.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:09.571 "dma_device_type": 2 00:17:09.571 } 
00:17:09.571 ], 00:17:09.571 "driver_specific": {} 00:17:09.571 } 00:17:09.571 ] 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.571 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:09.571 "name": "Existed_Raid", 00:17:09.571 "uuid": "3b825451-26b6-4f13-b437-6c9676bf42ce", 00:17:09.571 "strip_size_kb": 64, 00:17:09.571 "state": "configuring", 00:17:09.571 "raid_level": "raid5f", 00:17:09.571 "superblock": true, 00:17:09.571 "num_base_bdevs": 3, 00:17:09.571 "num_base_bdevs_discovered": 2, 00:17:09.571 "num_base_bdevs_operational": 3, 00:17:09.571 "base_bdevs_list": [ 00:17:09.571 { 00:17:09.571 "name": "BaseBdev1", 00:17:09.571 "uuid": "5804730e-b067-40fb-84b7-96890c7f0b27", 00:17:09.571 "is_configured": true, 00:17:09.571 "data_offset": 2048, 00:17:09.571 "data_size": 63488 00:17:09.571 }, 00:17:09.571 { 00:17:09.571 "name": "BaseBdev2", 00:17:09.571 "uuid": "7432a82d-3b14-4337-b178-6385c3792be2", 00:17:09.572 "is_configured": true, 00:17:09.572 "data_offset": 2048, 00:17:09.572 "data_size": 63488 00:17:09.572 }, 00:17:09.572 { 00:17:09.572 "name": "BaseBdev3", 00:17:09.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.572 "is_configured": false, 00:17:09.572 "data_offset": 0, 00:17:09.572 "data_size": 0 00:17:09.572 } 00:17:09.572 ] 00:17:09.572 }' 00:17:09.572 14:16:46 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:09.572 14:16:46 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.140 [2024-11-27 14:16:47.184072] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:10.140 [2024-11-27 14:16:47.184397] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:10.140 [2024-11-27 14:16:47.184439] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:10.140 [2024-11-27 14:16:47.184773] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:10.140 BaseBdev3 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.140 [2024-11-27 14:16:47.189929] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:10.140 [2024-11-27 14:16:47.189958] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:17:10.140 [2024-11-27 14:16:47.190298] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.140 [ 00:17:10.140 { 00:17:10.140 "name": "BaseBdev3", 00:17:10.140 "aliases": [ 00:17:10.140 "31061952-2d90-47ae-993a-a111d8bd0b0b" 00:17:10.140 ], 00:17:10.140 "product_name": "Malloc disk", 00:17:10.140 "block_size": 512, 00:17:10.140 "num_blocks": 65536, 00:17:10.140 "uuid": "31061952-2d90-47ae-993a-a111d8bd0b0b", 00:17:10.140 "assigned_rate_limits": { 00:17:10.140 "rw_ios_per_sec": 0, 00:17:10.140 "rw_mbytes_per_sec": 0, 00:17:10.140 "r_mbytes_per_sec": 0, 00:17:10.140 "w_mbytes_per_sec": 0 00:17:10.140 }, 00:17:10.140 "claimed": true, 00:17:10.140 "claim_type": "exclusive_write", 00:17:10.140 "zoned": false, 00:17:10.140 "supported_io_types": { 00:17:10.140 "read": true, 00:17:10.140 "write": true, 00:17:10.140 "unmap": true, 00:17:10.140 "flush": true, 00:17:10.140 "reset": true, 00:17:10.140 "nvme_admin": false, 00:17:10.140 "nvme_io": false, 00:17:10.140 "nvme_io_md": false, 00:17:10.140 "write_zeroes": true, 00:17:10.140 "zcopy": true, 00:17:10.140 "get_zone_info": false, 00:17:10.140 "zone_management": false, 00:17:10.140 "zone_append": false, 00:17:10.140 "compare": false, 00:17:10.140 "compare_and_write": false, 00:17:10.140 "abort": true, 00:17:10.140 "seek_hole": false, 00:17:10.140 "seek_data": false, 00:17:10.140 "copy": true, 00:17:10.140 
"nvme_iov_md": false 00:17:10.140 }, 00:17:10.140 "memory_domains": [ 00:17:10.140 { 00:17:10.140 "dma_device_id": "system", 00:17:10.140 "dma_device_type": 1 00:17:10.140 }, 00:17:10.140 { 00:17:10.140 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:10.140 "dma_device_type": 2 00:17:10.140 } 00:17:10.140 ], 00:17:10.140 "driver_specific": {} 00:17:10.140 } 00:17:10.140 ] 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.140 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.140 "name": "Existed_Raid", 00:17:10.140 "uuid": "3b825451-26b6-4f13-b437-6c9676bf42ce", 00:17:10.140 "strip_size_kb": 64, 00:17:10.140 "state": "online", 00:17:10.140 "raid_level": "raid5f", 00:17:10.140 "superblock": true, 00:17:10.140 "num_base_bdevs": 3, 00:17:10.140 "num_base_bdevs_discovered": 3, 00:17:10.140 "num_base_bdevs_operational": 3, 00:17:10.140 "base_bdevs_list": [ 00:17:10.140 { 00:17:10.140 "name": "BaseBdev1", 00:17:10.141 "uuid": "5804730e-b067-40fb-84b7-96890c7f0b27", 00:17:10.141 "is_configured": true, 00:17:10.141 "data_offset": 2048, 00:17:10.141 "data_size": 63488 00:17:10.141 }, 00:17:10.141 { 00:17:10.141 "name": "BaseBdev2", 00:17:10.141 "uuid": "7432a82d-3b14-4337-b178-6385c3792be2", 00:17:10.141 "is_configured": true, 00:17:10.141 "data_offset": 2048, 00:17:10.141 "data_size": 63488 00:17:10.141 }, 00:17:10.141 { 00:17:10.141 "name": "BaseBdev3", 00:17:10.141 "uuid": "31061952-2d90-47ae-993a-a111d8bd0b0b", 00:17:10.141 "is_configured": true, 00:17:10.141 "data_offset": 2048, 00:17:10.141 "data_size": 63488 00:17:10.141 } 00:17:10.141 ] 00:17:10.141 }' 00:17:10.141 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.141 14:16:47 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.709 [2024-11-27 14:16:47.744147] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.709 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:10.709 "name": "Existed_Raid", 00:17:10.709 "aliases": [ 00:17:10.709 "3b825451-26b6-4f13-b437-6c9676bf42ce" 00:17:10.709 ], 00:17:10.709 "product_name": "Raid Volume", 00:17:10.709 "block_size": 512, 00:17:10.709 "num_blocks": 126976, 00:17:10.709 "uuid": "3b825451-26b6-4f13-b437-6c9676bf42ce", 00:17:10.709 "assigned_rate_limits": { 00:17:10.709 "rw_ios_per_sec": 0, 00:17:10.709 
"rw_mbytes_per_sec": 0, 00:17:10.709 "r_mbytes_per_sec": 0, 00:17:10.709 "w_mbytes_per_sec": 0 00:17:10.709 }, 00:17:10.709 "claimed": false, 00:17:10.709 "zoned": false, 00:17:10.709 "supported_io_types": { 00:17:10.709 "read": true, 00:17:10.709 "write": true, 00:17:10.709 "unmap": false, 00:17:10.709 "flush": false, 00:17:10.709 "reset": true, 00:17:10.709 "nvme_admin": false, 00:17:10.709 "nvme_io": false, 00:17:10.709 "nvme_io_md": false, 00:17:10.709 "write_zeroes": true, 00:17:10.709 "zcopy": false, 00:17:10.709 "get_zone_info": false, 00:17:10.709 "zone_management": false, 00:17:10.709 "zone_append": false, 00:17:10.709 "compare": false, 00:17:10.709 "compare_and_write": false, 00:17:10.709 "abort": false, 00:17:10.709 "seek_hole": false, 00:17:10.709 "seek_data": false, 00:17:10.709 "copy": false, 00:17:10.709 "nvme_iov_md": false 00:17:10.709 }, 00:17:10.709 "driver_specific": { 00:17:10.709 "raid": { 00:17:10.709 "uuid": "3b825451-26b6-4f13-b437-6c9676bf42ce", 00:17:10.709 "strip_size_kb": 64, 00:17:10.709 "state": "online", 00:17:10.709 "raid_level": "raid5f", 00:17:10.709 "superblock": true, 00:17:10.709 "num_base_bdevs": 3, 00:17:10.709 "num_base_bdevs_discovered": 3, 00:17:10.709 "num_base_bdevs_operational": 3, 00:17:10.709 "base_bdevs_list": [ 00:17:10.709 { 00:17:10.709 "name": "BaseBdev1", 00:17:10.709 "uuid": "5804730e-b067-40fb-84b7-96890c7f0b27", 00:17:10.709 "is_configured": true, 00:17:10.709 "data_offset": 2048, 00:17:10.709 "data_size": 63488 00:17:10.709 }, 00:17:10.709 { 00:17:10.709 "name": "BaseBdev2", 00:17:10.709 "uuid": "7432a82d-3b14-4337-b178-6385c3792be2", 00:17:10.709 "is_configured": true, 00:17:10.709 "data_offset": 2048, 00:17:10.709 "data_size": 63488 00:17:10.709 }, 00:17:10.709 { 00:17:10.709 "name": "BaseBdev3", 00:17:10.709 "uuid": "31061952-2d90-47ae-993a-a111d8bd0b0b", 00:17:10.709 "is_configured": true, 00:17:10.709 "data_offset": 2048, 00:17:10.709 "data_size": 63488 00:17:10.709 } 00:17:10.710 ] 00:17:10.710 } 
00:17:10.710 } 00:17:10.710 }' 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:17:10.710 BaseBdev2 00:17:10.710 BaseBdev3' 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.710 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.969 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.969 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.969 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:10.969 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:10.969 14:16:47 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:10.969 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.969 14:16:47 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.969 [2024-11-27 14:16:48.048073] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 2 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.969 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:10.969 "name": "Existed_Raid", 00:17:10.969 "uuid": "3b825451-26b6-4f13-b437-6c9676bf42ce", 00:17:10.969 "strip_size_kb": 64, 00:17:10.969 "state": "online", 00:17:10.969 "raid_level": "raid5f", 00:17:10.969 "superblock": true, 00:17:10.969 "num_base_bdevs": 3, 00:17:10.969 "num_base_bdevs_discovered": 2, 00:17:10.969 "num_base_bdevs_operational": 2, 00:17:10.969 "base_bdevs_list": [ 00:17:10.969 { 00:17:10.969 "name": null, 00:17:10.969 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:10.969 "is_configured": false, 00:17:10.969 "data_offset": 0, 00:17:10.969 "data_size": 63488 00:17:10.969 }, 00:17:10.969 { 00:17:10.969 "name": "BaseBdev2", 00:17:10.969 "uuid": "7432a82d-3b14-4337-b178-6385c3792be2", 00:17:10.969 "is_configured": true, 00:17:10.969 "data_offset": 2048, 00:17:10.969 "data_size": 63488 00:17:10.969 }, 00:17:10.969 { 00:17:10.969 "name": "BaseBdev3", 00:17:10.969 "uuid": "31061952-2d90-47ae-993a-a111d8bd0b0b", 00:17:10.969 "is_configured": true, 00:17:10.969 "data_offset": 2048, 00:17:10.970 "data_size": 63488 00:17:10.970 } 00:17:10.970 ] 00:17:10.970 }' 00:17:10.970 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:10.970 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.538 14:16:48 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.538 [2024-11-27 14:16:48.713511] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:11.538 [2024-11-27 14:16:48.713715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:11.538 [2024-11-27 14:16:48.793047] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.538 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.796 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.796 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:17:11.796 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:11.796 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:17:11.796 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.796 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.796 [2024-11-27 14:16:48.853149] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:11.796 [2024-11-27 14:16:48.853252] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:17:11.796 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.796 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 3 -gt 2 ']' 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.797 14:16:48 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.797 BaseBdev2 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # 
[[ -z '' ]] 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:11.797 [ 00:17:11.797 { 00:17:11.797 "name": "BaseBdev2", 00:17:11.797 "aliases": [ 00:17:11.797 "f0b28554-01f6-4ba5-b7a8-71e186ae9f88" 00:17:11.797 ], 00:17:11.797 "product_name": "Malloc disk", 00:17:11.797 "block_size": 512, 00:17:11.797 "num_blocks": 65536, 00:17:11.797 "uuid": "f0b28554-01f6-4ba5-b7a8-71e186ae9f88", 00:17:11.797 "assigned_rate_limits": { 00:17:11.797 "rw_ios_per_sec": 0, 00:17:11.797 "rw_mbytes_per_sec": 0, 00:17:11.797 "r_mbytes_per_sec": 0, 00:17:11.797 "w_mbytes_per_sec": 0 00:17:11.797 }, 00:17:11.797 "claimed": false, 00:17:11.797 "zoned": false, 00:17:11.797 "supported_io_types": { 00:17:11.797 "read": true, 00:17:11.797 "write": true, 00:17:11.797 "unmap": true, 00:17:11.797 "flush": true, 00:17:11.797 "reset": true, 00:17:11.797 "nvme_admin": false, 00:17:11.797 "nvme_io": false, 00:17:11.797 "nvme_io_md": false, 00:17:11.797 "write_zeroes": true, 00:17:11.797 "zcopy": true, 00:17:11.797 "get_zone_info": false, 00:17:11.797 "zone_management": false, 00:17:11.797 "zone_append": false, 
00:17:11.797 "compare": false, 00:17:11.797 "compare_and_write": false, 00:17:11.797 "abort": true, 00:17:11.797 "seek_hole": false, 00:17:11.797 "seek_data": false, 00:17:11.797 "copy": true, 00:17:11.797 "nvme_iov_md": false 00:17:11.797 }, 00:17:11.797 "memory_domains": [ 00:17:11.797 { 00:17:11.797 "dma_device_id": "system", 00:17:11.797 "dma_device_type": 1 00:17:11.797 }, 00:17:11.797 { 00:17:11.797 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:11.797 "dma_device_type": 2 00:17:11.797 } 00:17:11.797 ], 00:17:11.797 "driver_specific": {} 00:17:11.797 } 00:17:11.797 ] 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.797 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.056 BaseBdev3 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:12.056 
14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.056 [ 00:17:12.056 { 00:17:12.056 "name": "BaseBdev3", 00:17:12.056 "aliases": [ 00:17:12.056 "2a7ce9f7-9074-4219-9853-9148d7685503" 00:17:12.056 ], 00:17:12.056 "product_name": "Malloc disk", 00:17:12.056 "block_size": 512, 00:17:12.056 "num_blocks": 65536, 00:17:12.056 "uuid": "2a7ce9f7-9074-4219-9853-9148d7685503", 00:17:12.056 "assigned_rate_limits": { 00:17:12.056 "rw_ios_per_sec": 0, 00:17:12.056 "rw_mbytes_per_sec": 0, 00:17:12.056 "r_mbytes_per_sec": 0, 00:17:12.056 "w_mbytes_per_sec": 0 00:17:12.056 }, 00:17:12.056 "claimed": false, 00:17:12.056 "zoned": false, 00:17:12.056 "supported_io_types": { 00:17:12.056 "read": true, 00:17:12.056 "write": true, 00:17:12.056 "unmap": true, 00:17:12.056 "flush": true, 00:17:12.056 "reset": true, 00:17:12.056 "nvme_admin": false, 00:17:12.056 "nvme_io": false, 00:17:12.056 "nvme_io_md": false, 00:17:12.056 "write_zeroes": true, 00:17:12.056 "zcopy": true, 00:17:12.056 "get_zone_info": 
false, 00:17:12.056 "zone_management": false, 00:17:12.056 "zone_append": false, 00:17:12.056 "compare": false, 00:17:12.056 "compare_and_write": false, 00:17:12.056 "abort": true, 00:17:12.056 "seek_hole": false, 00:17:12.056 "seek_data": false, 00:17:12.056 "copy": true, 00:17:12.056 "nvme_iov_md": false 00:17:12.056 }, 00:17:12.056 "memory_domains": [ 00:17:12.056 { 00:17:12.056 "dma_device_id": "system", 00:17:12.056 "dma_device_type": 1 00:17:12.056 }, 00:17:12.056 { 00:17:12.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:12.056 "dma_device_type": 2 00:17:12.056 } 00:17:12.056 ], 00:17:12.056 "driver_specific": {} 00:17:12.056 } 00:17:12.056 ] 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n Existed_Raid 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.056 [2024-11-27 14:16:49.150168] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:12.056 [2024-11-27 14:16:49.150221] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:12.056 [2024-11-27 14:16:49.150253] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:12.056 [2024-11-27 14:16:49.152658] bdev_raid.c:3326:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev3 is claimed 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.056 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.056 14:16:49 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.056 "name": "Existed_Raid", 00:17:12.057 "uuid": "03f4121a-f564-499d-8e89-b609843b1781", 00:17:12.057 "strip_size_kb": 64, 00:17:12.057 "state": "configuring", 00:17:12.057 "raid_level": "raid5f", 00:17:12.057 "superblock": true, 00:17:12.057 "num_base_bdevs": 3, 00:17:12.057 "num_base_bdevs_discovered": 2, 00:17:12.057 "num_base_bdevs_operational": 3, 00:17:12.057 "base_bdevs_list": [ 00:17:12.057 { 00:17:12.057 "name": "BaseBdev1", 00:17:12.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.057 "is_configured": false, 00:17:12.057 "data_offset": 0, 00:17:12.057 "data_size": 0 00:17:12.057 }, 00:17:12.057 { 00:17:12.057 "name": "BaseBdev2", 00:17:12.057 "uuid": "f0b28554-01f6-4ba5-b7a8-71e186ae9f88", 00:17:12.057 "is_configured": true, 00:17:12.057 "data_offset": 2048, 00:17:12.057 "data_size": 63488 00:17:12.057 }, 00:17:12.057 { 00:17:12.057 "name": "BaseBdev3", 00:17:12.057 "uuid": "2a7ce9f7-9074-4219-9853-9148d7685503", 00:17:12.057 "is_configured": true, 00:17:12.057 "data_offset": 2048, 00:17:12.057 "data_size": 63488 00:17:12.057 } 00:17:12.057 ] 00:17:12.057 }' 00:17:12.057 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.057 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.623 [2024-11-27 14:16:49.694421] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.623 
14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:12.623 "name": "Existed_Raid", 00:17:12.623 "uuid": 
"03f4121a-f564-499d-8e89-b609843b1781", 00:17:12.623 "strip_size_kb": 64, 00:17:12.623 "state": "configuring", 00:17:12.623 "raid_level": "raid5f", 00:17:12.623 "superblock": true, 00:17:12.623 "num_base_bdevs": 3, 00:17:12.623 "num_base_bdevs_discovered": 1, 00:17:12.623 "num_base_bdevs_operational": 3, 00:17:12.623 "base_bdevs_list": [ 00:17:12.623 { 00:17:12.623 "name": "BaseBdev1", 00:17:12.623 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:12.623 "is_configured": false, 00:17:12.623 "data_offset": 0, 00:17:12.623 "data_size": 0 00:17:12.623 }, 00:17:12.623 { 00:17:12.623 "name": null, 00:17:12.623 "uuid": "f0b28554-01f6-4ba5-b7a8-71e186ae9f88", 00:17:12.623 "is_configured": false, 00:17:12.623 "data_offset": 0, 00:17:12.623 "data_size": 63488 00:17:12.623 }, 00:17:12.623 { 00:17:12.623 "name": "BaseBdev3", 00:17:12.623 "uuid": "2a7ce9f7-9074-4219-9853-9148d7685503", 00:17:12.623 "is_configured": true, 00:17:12.623 "data_offset": 2048, 00:17:12.623 "data_size": 63488 00:17:12.623 } 00:17:12.623 ] 00:17:12.623 }' 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:12.623 14:16:49 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:17:13.191 14:16:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 [2024-11-27 14:16:50.292763] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:13.191 BaseBdev1 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 [ 00:17:13.191 { 00:17:13.191 "name": "BaseBdev1", 00:17:13.191 "aliases": [ 00:17:13.191 "e02e708b-1d4f-4994-a122-d3f2b56b1f99" 00:17:13.191 ], 00:17:13.191 "product_name": "Malloc disk", 00:17:13.191 "block_size": 512, 00:17:13.191 "num_blocks": 65536, 00:17:13.191 "uuid": "e02e708b-1d4f-4994-a122-d3f2b56b1f99", 00:17:13.191 "assigned_rate_limits": { 00:17:13.191 "rw_ios_per_sec": 0, 00:17:13.191 "rw_mbytes_per_sec": 0, 00:17:13.191 "r_mbytes_per_sec": 0, 00:17:13.191 "w_mbytes_per_sec": 0 00:17:13.191 }, 00:17:13.191 "claimed": true, 00:17:13.191 "claim_type": "exclusive_write", 00:17:13.191 "zoned": false, 00:17:13.191 "supported_io_types": { 00:17:13.191 "read": true, 00:17:13.191 "write": true, 00:17:13.191 "unmap": true, 00:17:13.191 "flush": true, 00:17:13.191 "reset": true, 00:17:13.191 "nvme_admin": false, 00:17:13.191 "nvme_io": false, 00:17:13.191 "nvme_io_md": false, 00:17:13.191 "write_zeroes": true, 00:17:13.191 "zcopy": true, 00:17:13.191 "get_zone_info": false, 00:17:13.191 "zone_management": false, 00:17:13.191 "zone_append": false, 00:17:13.191 "compare": false, 00:17:13.191 "compare_and_write": false, 00:17:13.191 "abort": true, 00:17:13.191 "seek_hole": false, 00:17:13.191 "seek_data": false, 00:17:13.191 "copy": true, 00:17:13.191 "nvme_iov_md": false 00:17:13.191 }, 00:17:13.191 "memory_domains": [ 00:17:13.191 { 00:17:13.191 "dma_device_id": "system", 00:17:13.191 "dma_device_type": 1 00:17:13.191 }, 00:17:13.191 { 00:17:13.191 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.191 "dma_device_type": 2 00:17:13.191 } 00:17:13.191 ], 00:17:13.191 "driver_specific": {} 00:17:13.191 } 00:17:13.191 ] 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # 
return 0 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.191 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.191 "name": "Existed_Raid", 00:17:13.191 "uuid": 
"03f4121a-f564-499d-8e89-b609843b1781", 00:17:13.191 "strip_size_kb": 64, 00:17:13.191 "state": "configuring", 00:17:13.191 "raid_level": "raid5f", 00:17:13.191 "superblock": true, 00:17:13.191 "num_base_bdevs": 3, 00:17:13.191 "num_base_bdevs_discovered": 2, 00:17:13.191 "num_base_bdevs_operational": 3, 00:17:13.191 "base_bdevs_list": [ 00:17:13.191 { 00:17:13.191 "name": "BaseBdev1", 00:17:13.191 "uuid": "e02e708b-1d4f-4994-a122-d3f2b56b1f99", 00:17:13.191 "is_configured": true, 00:17:13.191 "data_offset": 2048, 00:17:13.191 "data_size": 63488 00:17:13.191 }, 00:17:13.191 { 00:17:13.191 "name": null, 00:17:13.191 "uuid": "f0b28554-01f6-4ba5-b7a8-71e186ae9f88", 00:17:13.191 "is_configured": false, 00:17:13.191 "data_offset": 0, 00:17:13.191 "data_size": 63488 00:17:13.191 }, 00:17:13.191 { 00:17:13.191 "name": "BaseBdev3", 00:17:13.191 "uuid": "2a7ce9f7-9074-4219-9853-9148d7685503", 00:17:13.191 "is_configured": true, 00:17:13.191 "data_offset": 2048, 00:17:13.191 "data_size": 63488 00:17:13.191 } 00:17:13.191 ] 00:17:13.191 }' 00:17:13.192 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.192 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:17:13.759 14:16:50 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.759 [2024-11-27 14:16:50.917038] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # 
rpc_cmd bdev_raid_get_bdevs all 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:13.759 "name": "Existed_Raid", 00:17:13.759 "uuid": "03f4121a-f564-499d-8e89-b609843b1781", 00:17:13.759 "strip_size_kb": 64, 00:17:13.759 "state": "configuring", 00:17:13.759 "raid_level": "raid5f", 00:17:13.759 "superblock": true, 00:17:13.759 "num_base_bdevs": 3, 00:17:13.759 "num_base_bdevs_discovered": 1, 00:17:13.759 "num_base_bdevs_operational": 3, 00:17:13.759 "base_bdevs_list": [ 00:17:13.759 { 00:17:13.759 "name": "BaseBdev1", 00:17:13.759 "uuid": "e02e708b-1d4f-4994-a122-d3f2b56b1f99", 00:17:13.759 "is_configured": true, 00:17:13.759 "data_offset": 2048, 00:17:13.759 "data_size": 63488 00:17:13.759 }, 00:17:13.759 { 00:17:13.759 "name": null, 00:17:13.759 "uuid": "f0b28554-01f6-4ba5-b7a8-71e186ae9f88", 00:17:13.759 "is_configured": false, 00:17:13.759 "data_offset": 0, 00:17:13.759 "data_size": 63488 00:17:13.759 }, 00:17:13.759 { 00:17:13.759 "name": null, 00:17:13.759 "uuid": "2a7ce9f7-9074-4219-9853-9148d7685503", 00:17:13.759 "is_configured": false, 00:17:13.759 "data_offset": 0, 00:17:13.759 "data_size": 63488 00:17:13.759 } 00:17:13.759 ] 00:17:13.759 }' 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:13.759 14:16:50 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.327 [2024-11-27 14:16:51.505279] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:14.327 "name": "Existed_Raid", 00:17:14.327 "uuid": "03f4121a-f564-499d-8e89-b609843b1781", 00:17:14.327 "strip_size_kb": 64, 00:17:14.327 "state": "configuring", 00:17:14.327 "raid_level": "raid5f", 00:17:14.327 "superblock": true, 00:17:14.327 "num_base_bdevs": 3, 00:17:14.327 "num_base_bdevs_discovered": 2, 00:17:14.327 "num_base_bdevs_operational": 3, 00:17:14.327 "base_bdevs_list": [ 00:17:14.327 { 00:17:14.327 "name": "BaseBdev1", 00:17:14.327 "uuid": "e02e708b-1d4f-4994-a122-d3f2b56b1f99", 00:17:14.327 "is_configured": true, 00:17:14.327 "data_offset": 2048, 00:17:14.327 "data_size": 63488 00:17:14.327 }, 00:17:14.327 { 00:17:14.327 "name": null, 00:17:14.327 "uuid": "f0b28554-01f6-4ba5-b7a8-71e186ae9f88", 00:17:14.327 "is_configured": false, 00:17:14.327 "data_offset": 0, 00:17:14.327 "data_size": 63488 00:17:14.327 }, 00:17:14.327 { 00:17:14.327 "name": "BaseBdev3", 00:17:14.327 "uuid": "2a7ce9f7-9074-4219-9853-9148d7685503", 
00:17:14.327 "is_configured": true, 00:17:14.327 "data_offset": 2048, 00:17:14.327 "data_size": 63488 00:17:14.327 } 00:17:14.327 ] 00:17:14.327 }' 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:14.327 14:16:51 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:14.894 [2024-11-27 14:16:52.085521] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:14.894 14:16:52 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.153 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.153 "name": "Existed_Raid", 00:17:15.153 "uuid": "03f4121a-f564-499d-8e89-b609843b1781", 00:17:15.153 "strip_size_kb": 64, 00:17:15.153 "state": "configuring", 00:17:15.153 "raid_level": "raid5f", 00:17:15.153 "superblock": true, 00:17:15.153 "num_base_bdevs": 3, 00:17:15.153 "num_base_bdevs_discovered": 1, 00:17:15.153 "num_base_bdevs_operational": 3, 00:17:15.153 "base_bdevs_list": [ 00:17:15.153 { 00:17:15.153 
"name": null, 00:17:15.153 "uuid": "e02e708b-1d4f-4994-a122-d3f2b56b1f99", 00:17:15.153 "is_configured": false, 00:17:15.153 "data_offset": 0, 00:17:15.153 "data_size": 63488 00:17:15.153 }, 00:17:15.153 { 00:17:15.153 "name": null, 00:17:15.153 "uuid": "f0b28554-01f6-4ba5-b7a8-71e186ae9f88", 00:17:15.153 "is_configured": false, 00:17:15.153 "data_offset": 0, 00:17:15.153 "data_size": 63488 00:17:15.153 }, 00:17:15.153 { 00:17:15.153 "name": "BaseBdev3", 00:17:15.153 "uuid": "2a7ce9f7-9074-4219-9853-9148d7685503", 00:17:15.153 "is_configured": true, 00:17:15.153 "data_offset": 2048, 00:17:15.153 "data_size": 63488 00:17:15.153 } 00:17:15.153 ] 00:17:15.153 }' 00:17:15.154 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.154 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.721 [2024-11-27 
14:16:52.776231] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 3 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.721 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:15.721 "name": "Existed_Raid", 00:17:15.721 "uuid": "03f4121a-f564-499d-8e89-b609843b1781", 00:17:15.721 "strip_size_kb": 64, 00:17:15.721 "state": "configuring", 00:17:15.721 "raid_level": "raid5f", 00:17:15.721 "superblock": true, 00:17:15.721 "num_base_bdevs": 3, 00:17:15.721 "num_base_bdevs_discovered": 2, 00:17:15.721 "num_base_bdevs_operational": 3, 00:17:15.721 "base_bdevs_list": [ 00:17:15.722 { 00:17:15.722 "name": null, 00:17:15.722 "uuid": "e02e708b-1d4f-4994-a122-d3f2b56b1f99", 00:17:15.722 "is_configured": false, 00:17:15.722 "data_offset": 0, 00:17:15.722 "data_size": 63488 00:17:15.722 }, 00:17:15.722 { 00:17:15.722 "name": "BaseBdev2", 00:17:15.722 "uuid": "f0b28554-01f6-4ba5-b7a8-71e186ae9f88", 00:17:15.722 "is_configured": true, 00:17:15.722 "data_offset": 2048, 00:17:15.722 "data_size": 63488 00:17:15.722 }, 00:17:15.722 { 00:17:15.722 "name": "BaseBdev3", 00:17:15.722 "uuid": "2a7ce9f7-9074-4219-9853-9148d7685503", 00:17:15.722 "is_configured": true, 00:17:15.722 "data_offset": 2048, 00:17:15.722 "data_size": 63488 00:17:15.722 } 00:17:15.722 ] 00:17:15.722 }' 00:17:15.722 14:16:52 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:15.722 14:16:52 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.290 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.290 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:17:16.290 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.290 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.290 14:16:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.290 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u e02e708b-1d4f-4994-a122-d3f2b56b1f99 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.291 [2024-11-27 14:16:53.445133] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:17:16.291 [2024-11-27 14:16:53.445442] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:16.291 [2024-11-27 14:16:53.445495] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:16.291 [2024-11-27 14:16:53.445813] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:17:16.291 NewBaseBdev 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:17:16.291 14:16:53 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:16.291 [2024-11-27 14:16:53.451336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:16.291 [2024-11-27 14:16:53.451363] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:17:16.291 [2024-11-27 14:16:53.451540] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.291 [ 00:17:16.291 { 00:17:16.291 "name": "NewBaseBdev", 00:17:16.291 "aliases": [ 00:17:16.291 "e02e708b-1d4f-4994-a122-d3f2b56b1f99" 00:17:16.291 ], 00:17:16.291 "product_name": "Malloc 
disk", 00:17:16.291 "block_size": 512, 00:17:16.291 "num_blocks": 65536, 00:17:16.291 "uuid": "e02e708b-1d4f-4994-a122-d3f2b56b1f99", 00:17:16.291 "assigned_rate_limits": { 00:17:16.291 "rw_ios_per_sec": 0, 00:17:16.291 "rw_mbytes_per_sec": 0, 00:17:16.291 "r_mbytes_per_sec": 0, 00:17:16.291 "w_mbytes_per_sec": 0 00:17:16.291 }, 00:17:16.291 "claimed": true, 00:17:16.291 "claim_type": "exclusive_write", 00:17:16.291 "zoned": false, 00:17:16.291 "supported_io_types": { 00:17:16.291 "read": true, 00:17:16.291 "write": true, 00:17:16.291 "unmap": true, 00:17:16.291 "flush": true, 00:17:16.291 "reset": true, 00:17:16.291 "nvme_admin": false, 00:17:16.291 "nvme_io": false, 00:17:16.291 "nvme_io_md": false, 00:17:16.291 "write_zeroes": true, 00:17:16.291 "zcopy": true, 00:17:16.291 "get_zone_info": false, 00:17:16.291 "zone_management": false, 00:17:16.291 "zone_append": false, 00:17:16.291 "compare": false, 00:17:16.291 "compare_and_write": false, 00:17:16.291 "abort": true, 00:17:16.291 "seek_hole": false, 00:17:16.291 "seek_data": false, 00:17:16.291 "copy": true, 00:17:16.291 "nvme_iov_md": false 00:17:16.291 }, 00:17:16.291 "memory_domains": [ 00:17:16.291 { 00:17:16.291 "dma_device_id": "system", 00:17:16.291 "dma_device_type": 1 00:17:16.291 }, 00:17:16.291 { 00:17:16.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.291 "dma_device_type": 2 00:17:16.291 } 00:17:16.291 ], 00:17:16.291 "driver_specific": {} 00:17:16.291 } 00:17:16.291 ] 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:17:16.291 14:16:53 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:16.291 "name": "Existed_Raid", 00:17:16.291 "uuid": "03f4121a-f564-499d-8e89-b609843b1781", 00:17:16.291 "strip_size_kb": 64, 00:17:16.291 "state": "online", 00:17:16.291 "raid_level": "raid5f", 00:17:16.291 "superblock": true, 00:17:16.291 "num_base_bdevs": 3, 00:17:16.291 "num_base_bdevs_discovered": 3, 00:17:16.291 "num_base_bdevs_operational": 3, 00:17:16.291 
"base_bdevs_list": [ 00:17:16.291 { 00:17:16.291 "name": "NewBaseBdev", 00:17:16.291 "uuid": "e02e708b-1d4f-4994-a122-d3f2b56b1f99", 00:17:16.291 "is_configured": true, 00:17:16.291 "data_offset": 2048, 00:17:16.291 "data_size": 63488 00:17:16.291 }, 00:17:16.291 { 00:17:16.291 "name": "BaseBdev2", 00:17:16.291 "uuid": "f0b28554-01f6-4ba5-b7a8-71e186ae9f88", 00:17:16.291 "is_configured": true, 00:17:16.291 "data_offset": 2048, 00:17:16.291 "data_size": 63488 00:17:16.291 }, 00:17:16.291 { 00:17:16.291 "name": "BaseBdev3", 00:17:16.291 "uuid": "2a7ce9f7-9074-4219-9853-9148d7685503", 00:17:16.291 "is_configured": true, 00:17:16.291 "data_offset": 2048, 00:17:16.291 "data_size": 63488 00:17:16.291 } 00:17:16.291 ] 00:17:16.291 }' 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:16.291 14:16:53 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:16.858 [2024-11-27 14:16:54.017540] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.858 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:16.858 "name": "Existed_Raid", 00:17:16.858 "aliases": [ 00:17:16.858 "03f4121a-f564-499d-8e89-b609843b1781" 00:17:16.858 ], 00:17:16.858 "product_name": "Raid Volume", 00:17:16.858 "block_size": 512, 00:17:16.858 "num_blocks": 126976, 00:17:16.858 "uuid": "03f4121a-f564-499d-8e89-b609843b1781", 00:17:16.858 "assigned_rate_limits": { 00:17:16.858 "rw_ios_per_sec": 0, 00:17:16.858 "rw_mbytes_per_sec": 0, 00:17:16.858 "r_mbytes_per_sec": 0, 00:17:16.858 "w_mbytes_per_sec": 0 00:17:16.858 }, 00:17:16.858 "claimed": false, 00:17:16.858 "zoned": false, 00:17:16.858 "supported_io_types": { 00:17:16.858 "read": true, 00:17:16.858 "write": true, 00:17:16.858 "unmap": false, 00:17:16.858 "flush": false, 00:17:16.858 "reset": true, 00:17:16.858 "nvme_admin": false, 00:17:16.858 "nvme_io": false, 00:17:16.858 "nvme_io_md": false, 00:17:16.858 "write_zeroes": true, 00:17:16.858 "zcopy": false, 00:17:16.858 "get_zone_info": false, 00:17:16.859 "zone_management": false, 00:17:16.859 "zone_append": false, 00:17:16.859 "compare": false, 00:17:16.859 "compare_and_write": false, 00:17:16.859 "abort": false, 00:17:16.859 "seek_hole": false, 00:17:16.859 "seek_data": false, 00:17:16.859 "copy": false, 00:17:16.859 "nvme_iov_md": false 00:17:16.859 }, 00:17:16.859 "driver_specific": { 00:17:16.859 "raid": { 00:17:16.859 "uuid": "03f4121a-f564-499d-8e89-b609843b1781", 00:17:16.859 "strip_size_kb": 64, 00:17:16.859 "state": "online", 00:17:16.859 "raid_level": "raid5f", 00:17:16.859 "superblock": true, 00:17:16.859 
"num_base_bdevs": 3, 00:17:16.859 "num_base_bdevs_discovered": 3, 00:17:16.859 "num_base_bdevs_operational": 3, 00:17:16.859 "base_bdevs_list": [ 00:17:16.859 { 00:17:16.859 "name": "NewBaseBdev", 00:17:16.859 "uuid": "e02e708b-1d4f-4994-a122-d3f2b56b1f99", 00:17:16.859 "is_configured": true, 00:17:16.859 "data_offset": 2048, 00:17:16.859 "data_size": 63488 00:17:16.859 }, 00:17:16.859 { 00:17:16.859 "name": "BaseBdev2", 00:17:16.859 "uuid": "f0b28554-01f6-4ba5-b7a8-71e186ae9f88", 00:17:16.859 "is_configured": true, 00:17:16.859 "data_offset": 2048, 00:17:16.859 "data_size": 63488 00:17:16.859 }, 00:17:16.859 { 00:17:16.859 "name": "BaseBdev3", 00:17:16.859 "uuid": "2a7ce9f7-9074-4219-9853-9148d7685503", 00:17:16.859 "is_configured": true, 00:17:16.859 "data_offset": 2048, 00:17:16.859 "data_size": 63488 00:17:16.859 } 00:17:16.859 ] 00:17:16.859 } 00:17:16.859 } 00:17:16.859 }' 00:17:16.859 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.859 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:17:16.859 BaseBdev2 00:17:16.859 BaseBdev3' 00:17:16.859 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.117 
14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.117 14:16:54 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:17.117 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:17.117 [2024-11-27 14:16:54.329380] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:17.117 [2024-11-27 14:16:54.329415] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:17.118 [2024-11-27 14:16:54.329505] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:17.118 [2024-11-27 14:16:54.329872] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:17.118 [2024-11-27 14:16:54.329904] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:17:17.118 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.118 14:16:54 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 80766 00:17:17.118 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 80766 ']' 00:17:17.118 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 80766 00:17:17.118 14:16:54 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:17:17.118 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.118 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 80766 00:17:17.118 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.118 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.118 killing process with pid 80766 00:17:17.118 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 80766' 00:17:17.118 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 80766 00:17:17.118 [2024-11-27 14:16:54.363466] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:17.118 14:16:54 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 80766 00:17:17.376 [2024-11-27 14:16:54.626665] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.753 14:16:55 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:17:18.753 00:17:18.753 real 0m11.915s 00:17:18.753 user 0m19.764s 00:17:18.753 sys 0m1.666s 00:17:18.753 14:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:18.753 14:16:55 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:18.753 ************************************ 00:17:18.753 END TEST raid5f_state_function_test_sb 00:17:18.753 ************************************ 00:17:18.753 14:16:55 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 3 00:17:18.753 14:16:55 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:18.753 
14:16:55 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:18.753 14:16:55 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.753 ************************************ 00:17:18.753 START TEST raid5f_superblock_test 00:17:18.753 ************************************ 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 3 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=3 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@405 -- # strip_size=64 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=81396 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 81396 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 81396 ']' 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:18.753 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:18.753 14:16:55 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:18.753 [2024-11-27 14:16:55.829398] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:17:18.753 [2024-11-27 14:16:55.829573] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81396 ] 00:17:18.753 [2024-11-27 14:16:56.012742] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.012 [2024-11-27 14:16:56.141688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.270 [2024-11-27 14:16:56.344752] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.270 [2024-11-27 14:16:56.344834] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.837 malloc1 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.837 [2024-11-27 14:16:56.880514] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:19.837 [2024-11-27 14:16:56.880585] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.837 [2024-11-27 14:16:56.880617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:19.837 [2024-11-27 14:16:56.880633] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.837 [2024-11-27 14:16:56.883390] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.837 [2024-11-27 14:16:56.883463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:19.837 pt1 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.837 malloc2 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.837 [2024-11-27 14:16:56.936619] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:19.837 [2024-11-27 14:16:56.936686] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.837 [2024-11-27 14:16:56.936723] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:19.837 [2024-11-27 14:16:56.936738] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.837 [2024-11-27 14:16:56.939491] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.837 [2024-11-27 14:16:56.939548] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:19.837 pt2 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.837 malloc3 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.837 14:16:56 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.837 [2024-11-27 14:16:57.004501] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:19.837 [2024-11-27 14:16:57.004567] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:19.837 [2024-11-27 14:16:57.004601] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:19.837 [2024-11-27 14:16:57.004616] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:19.837 [2024-11-27 14:16:57.007323] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:19.837 [2024-11-27 14:16:57.007367] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:19.837 pt3 00:17:19.837 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.837 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:17:19.837 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:17:19.837 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3'\''' -n raid_bdev1 -s 00:17:19.837 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.838 [2024-11-27 14:16:57.016554] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:19.838 [2024-11-27 14:16:57.018963] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:19.838 [2024-11-27 14:16:57.019065] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:19.838 [2024-11-27 14:16:57.019284] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:19.838 [2024-11-27 14:16:57.019322] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 
00:17:19.838 [2024-11-27 14:16:57.019622] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:17:19.838 [2024-11-27 14:16:57.024812] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:19.838 [2024-11-27 14:16:57.024840] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:19.838 [2024-11-27 14:16:57.025082] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:19.838 "name": "raid_bdev1", 00:17:19.838 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:19.838 "strip_size_kb": 64, 00:17:19.838 "state": "online", 00:17:19.838 "raid_level": "raid5f", 00:17:19.838 "superblock": true, 00:17:19.838 "num_base_bdevs": 3, 00:17:19.838 "num_base_bdevs_discovered": 3, 00:17:19.838 "num_base_bdevs_operational": 3, 00:17:19.838 "base_bdevs_list": [ 00:17:19.838 { 00:17:19.838 "name": "pt1", 00:17:19.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:19.838 "is_configured": true, 00:17:19.838 "data_offset": 2048, 00:17:19.838 "data_size": 63488 00:17:19.838 }, 00:17:19.838 { 00:17:19.838 "name": "pt2", 00:17:19.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:19.838 "is_configured": true, 00:17:19.838 "data_offset": 2048, 00:17:19.838 "data_size": 63488 00:17:19.838 }, 00:17:19.838 { 00:17:19.838 "name": "pt3", 00:17:19.838 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:19.838 "is_configured": true, 00:17:19.838 "data_offset": 2048, 00:17:19.838 "data_size": 63488 00:17:19.838 } 00:17:19.838 ] 00:17:19.838 }' 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:19.838 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:20.406 14:16:57 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.406 [2024-11-27 14:16:57.567139] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:20.406 "name": "raid_bdev1", 00:17:20.406 "aliases": [ 00:17:20.406 "412cfcc9-4603-4668-a255-d69bd0a33e68" 00:17:20.406 ], 00:17:20.406 "product_name": "Raid Volume", 00:17:20.406 "block_size": 512, 00:17:20.406 "num_blocks": 126976, 00:17:20.406 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:20.406 "assigned_rate_limits": { 00:17:20.406 "rw_ios_per_sec": 0, 00:17:20.406 "rw_mbytes_per_sec": 0, 00:17:20.406 "r_mbytes_per_sec": 0, 00:17:20.406 "w_mbytes_per_sec": 0 00:17:20.406 }, 00:17:20.406 "claimed": false, 00:17:20.406 "zoned": false, 00:17:20.406 "supported_io_types": { 00:17:20.406 "read": true, 00:17:20.406 "write": true, 00:17:20.406 "unmap": false, 00:17:20.406 "flush": false, 00:17:20.406 "reset": true, 00:17:20.406 "nvme_admin": false, 00:17:20.406 "nvme_io": false, 00:17:20.406 "nvme_io_md": false, 
00:17:20.406 "write_zeroes": true, 00:17:20.406 "zcopy": false, 00:17:20.406 "get_zone_info": false, 00:17:20.406 "zone_management": false, 00:17:20.406 "zone_append": false, 00:17:20.406 "compare": false, 00:17:20.406 "compare_and_write": false, 00:17:20.406 "abort": false, 00:17:20.406 "seek_hole": false, 00:17:20.406 "seek_data": false, 00:17:20.406 "copy": false, 00:17:20.406 "nvme_iov_md": false 00:17:20.406 }, 00:17:20.406 "driver_specific": { 00:17:20.406 "raid": { 00:17:20.406 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:20.406 "strip_size_kb": 64, 00:17:20.406 "state": "online", 00:17:20.406 "raid_level": "raid5f", 00:17:20.406 "superblock": true, 00:17:20.406 "num_base_bdevs": 3, 00:17:20.406 "num_base_bdevs_discovered": 3, 00:17:20.406 "num_base_bdevs_operational": 3, 00:17:20.406 "base_bdevs_list": [ 00:17:20.406 { 00:17:20.406 "name": "pt1", 00:17:20.406 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.406 "is_configured": true, 00:17:20.406 "data_offset": 2048, 00:17:20.406 "data_size": 63488 00:17:20.406 }, 00:17:20.406 { 00:17:20.406 "name": "pt2", 00:17:20.406 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.406 "is_configured": true, 00:17:20.406 "data_offset": 2048, 00:17:20.406 "data_size": 63488 00:17:20.406 }, 00:17:20.406 { 00:17:20.406 "name": "pt3", 00:17:20.406 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.406 "is_configured": true, 00:17:20.406 "data_offset": 2048, 00:17:20.406 "data_size": 63488 00:17:20.406 } 00:17:20.406 ] 00:17:20.406 } 00:17:20.406 } 00:17:20.406 }' 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:20.406 pt2 00:17:20.406 pt3' 00:17:20.406 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, 
.md_interleave, .dif_type] | join(" ")' 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:20.664 
14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.664 [2024-11-27 14:16:57.855141] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:20.664 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.665 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=412cfcc9-4603-4668-a255-d69bd0a33e68 00:17:20.665 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 412cfcc9-4603-4668-a255-d69bd0a33e68 ']' 00:17:20.665 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:20.665 14:16:57 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.665 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.665 [2024-11-27 14:16:57.902941] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.665 [2024-11-27 14:16:57.902983] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:20.665 [2024-11-27 14:16:57.903074] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:20.665 [2024-11-27 14:16:57.903172] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:20.665 [2024-11-27 14:16:57.903189] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:20.665 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.665 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.665 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:17:20.665 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.665 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.665 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.922 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:17:20.922 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:17:20.922 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:20.922 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.923 14:16:57 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # 
'[' false == true ']' 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3'\''' -n raid_bdev1 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.923 [2024-11-27 14:16:58.043048] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:20.923 [2024-11-27 14:16:58.045577] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:20.923 [2024-11-27 14:16:58.045676] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:17:20.923 [2024-11-27 14:16:58.045753] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:20.923 [2024-11-27 14:16:58.045835] 
bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:20.923 [2024-11-27 14:16:58.045871] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:17:20.923 [2024-11-27 14:16:58.045898] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:20.923 [2024-11-27 14:16:58.045911] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:17:20.923 request: 00:17:20.923 { 00:17:20.923 "name": "raid_bdev1", 00:17:20.923 "raid_level": "raid5f", 00:17:20.923 "base_bdevs": [ 00:17:20.923 "malloc1", 00:17:20.923 "malloc2", 00:17:20.923 "malloc3" 00:17:20.923 ], 00:17:20.923 "strip_size_kb": 64, 00:17:20.923 "superblock": false, 00:17:20.923 "method": "bdev_raid_create", 00:17:20.923 "req_id": 1 00:17:20.923 } 00:17:20.923 Got JSON-RPC error response 00:17:20.923 response: 00:17:20.923 { 00:17:20.923 "code": -17, 00:17:20.923 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:20.923 } 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.923 [2024-11-27 14:16:58.110987] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.923 [2024-11-27 14:16:58.111178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.923 [2024-11-27 14:16:58.111254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:17:20.923 [2024-11-27 14:16:58.111403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.923 [2024-11-27 14:16:58.114312] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.923 [2024-11-27 14:16:58.114355] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.923 [2024-11-27 14:16:58.114461] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:20.923 [2024-11-27 14:16:58.114527] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:20.923 pt1 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring 
raid5f 64 3 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:20.923 "name": "raid_bdev1", 00:17:20.923 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:20.923 "strip_size_kb": 64, 00:17:20.923 "state": "configuring", 00:17:20.923 "raid_level": "raid5f", 00:17:20.923 "superblock": true, 00:17:20.923 "num_base_bdevs": 3, 00:17:20.923 "num_base_bdevs_discovered": 1, 00:17:20.923 
"num_base_bdevs_operational": 3, 00:17:20.923 "base_bdevs_list": [ 00:17:20.923 { 00:17:20.923 "name": "pt1", 00:17:20.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:20.923 "is_configured": true, 00:17:20.923 "data_offset": 2048, 00:17:20.923 "data_size": 63488 00:17:20.923 }, 00:17:20.923 { 00:17:20.923 "name": null, 00:17:20.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:20.923 "is_configured": false, 00:17:20.923 "data_offset": 2048, 00:17:20.923 "data_size": 63488 00:17:20.923 }, 00:17:20.923 { 00:17:20.923 "name": null, 00:17:20.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:20.923 "is_configured": false, 00:17:20.923 "data_offset": 2048, 00:17:20.923 "data_size": 63488 00:17:20.923 } 00:17:20.923 ] 00:17:20.923 }' 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:20.923 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 3 -gt 2 ']' 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.501 [2024-11-27 14:16:58.635147] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:21.501 [2024-11-27 14:16:58.635223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:21.501 [2024-11-27 14:16:58.635257] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:21.501 [2024-11-27 14:16:58.635272] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:21.501 [2024-11-27 14:16:58.635838] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:21.501 [2024-11-27 14:16:58.635876] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:21.501 [2024-11-27 14:16:58.635982] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:21.501 [2024-11-27 14:16:58.636021] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.501 pt2 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.501 [2024-11-27 14:16:58.643126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.501 14:16:58 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:21.501 "name": "raid_bdev1", 00:17:21.501 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:21.501 "strip_size_kb": 64, 00:17:21.501 "state": "configuring", 00:17:21.501 "raid_level": "raid5f", 00:17:21.501 "superblock": true, 00:17:21.501 "num_base_bdevs": 3, 00:17:21.501 "num_base_bdevs_discovered": 1, 00:17:21.501 "num_base_bdevs_operational": 3, 00:17:21.501 "base_bdevs_list": [ 00:17:21.501 { 00:17:21.501 "name": "pt1", 00:17:21.501 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.501 "is_configured": true, 00:17:21.501 "data_offset": 2048, 00:17:21.501 "data_size": 63488 00:17:21.501 }, 00:17:21.501 { 00:17:21.501 "name": null, 00:17:21.501 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.501 "is_configured": false, 00:17:21.501 "data_offset": 0, 00:17:21.501 "data_size": 63488 00:17:21.501 }, 00:17:21.501 { 00:17:21.501 "name": null, 00:17:21.501 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:21.501 "is_configured": false, 00:17:21.502 "data_offset": 2048, 00:17:21.502 "data_size": 63488 00:17:21.502 } 00:17:21.502 ] 00:17:21.502 }' 00:17:21.502 14:16:58 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:21.502 14:16:58 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.087 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:17:22.087 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:22.087 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:22.087 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.087 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.087 [2024-11-27 14:16:59.183266] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:22.087 [2024-11-27 14:16:59.183344] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.087 [2024-11-27 14:16:59.183371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:17:22.087 [2024-11-27 14:16:59.183388] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.087 [2024-11-27 14:16:59.183987] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.087 [2024-11-27 14:16:59.184019] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:22.087 [2024-11-27 14:16:59.184118] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:22.087 [2024-11-27 14:16:59.184154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:22.087 pt2 00:17:22.087 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.087 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:22.087 14:16:59 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:22.087 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:22.087 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.087 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.087 [2024-11-27 14:16:59.195256] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:22.087 [2024-11-27 14:16:59.195310] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:22.087 [2024-11-27 14:16:59.195333] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:17:22.087 [2024-11-27 14:16:59.195349] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:22.087 [2024-11-27 14:16:59.195828] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:22.088 [2024-11-27 14:16:59.195867] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:22.088 [2024-11-27 14:16:59.195948] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:22.088 [2024-11-27 14:16:59.195980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:22.088 [2024-11-27 14:16:59.196141] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:17:22.088 [2024-11-27 14:16:59.196171] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:22.088 [2024-11-27 14:16:59.196473] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:22.088 [2024-11-27 14:16:59.201300] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:17:22.088 [2024-11-27 14:16:59.201330] 
bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:17:22.088 [2024-11-27 14:16:59.201549] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:22.088 pt3 00:17:22.088 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.088 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:17:22.088 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:17:22.088 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:22.088 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.088 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.088 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.088 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.088 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:22.089 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.089 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.089 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.089 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.089 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.089 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.089 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- 
# set +x 00:17:22.089 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.089 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.089 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.089 "name": "raid_bdev1", 00:17:22.089 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:22.089 "strip_size_kb": 64, 00:17:22.089 "state": "online", 00:17:22.089 "raid_level": "raid5f", 00:17:22.089 "superblock": true, 00:17:22.089 "num_base_bdevs": 3, 00:17:22.089 "num_base_bdevs_discovered": 3, 00:17:22.089 "num_base_bdevs_operational": 3, 00:17:22.089 "base_bdevs_list": [ 00:17:22.089 { 00:17:22.089 "name": "pt1", 00:17:22.089 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.089 "is_configured": true, 00:17:22.089 "data_offset": 2048, 00:17:22.089 "data_size": 63488 00:17:22.089 }, 00:17:22.089 { 00:17:22.089 "name": "pt2", 00:17:22.089 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.089 "is_configured": true, 00:17:22.089 "data_offset": 2048, 00:17:22.089 "data_size": 63488 00:17:22.089 }, 00:17:22.089 { 00:17:22.089 "name": "pt3", 00:17:22.089 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:22.089 "is_configured": true, 00:17:22.089 "data_offset": 2048, 00:17:22.089 "data_size": 63488 00:17:22.089 } 00:17:22.089 ] 00:17:22.089 }' 00:17:22.090 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.090 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.662 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:17:22.662 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:17:22.662 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:17:22.662 
14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:17:22.662 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:17:22.662 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:17:22.662 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.662 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:17:22.662 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.662 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.662 [2024-11-27 14:16:59.699503] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.662 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.662 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:22.662 "name": "raid_bdev1", 00:17:22.662 "aliases": [ 00:17:22.663 "412cfcc9-4603-4668-a255-d69bd0a33e68" 00:17:22.663 ], 00:17:22.663 "product_name": "Raid Volume", 00:17:22.663 "block_size": 512, 00:17:22.663 "num_blocks": 126976, 00:17:22.663 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:22.663 "assigned_rate_limits": { 00:17:22.663 "rw_ios_per_sec": 0, 00:17:22.663 "rw_mbytes_per_sec": 0, 00:17:22.663 "r_mbytes_per_sec": 0, 00:17:22.663 "w_mbytes_per_sec": 0 00:17:22.663 }, 00:17:22.663 "claimed": false, 00:17:22.663 "zoned": false, 00:17:22.663 "supported_io_types": { 00:17:22.663 "read": true, 00:17:22.663 "write": true, 00:17:22.663 "unmap": false, 00:17:22.663 "flush": false, 00:17:22.663 "reset": true, 00:17:22.663 "nvme_admin": false, 00:17:22.663 "nvme_io": false, 00:17:22.663 "nvme_io_md": false, 00:17:22.663 "write_zeroes": true, 00:17:22.663 "zcopy": false, 00:17:22.663 "get_zone_info": false, 
00:17:22.663 "zone_management": false, 00:17:22.663 "zone_append": false, 00:17:22.663 "compare": false, 00:17:22.663 "compare_and_write": false, 00:17:22.663 "abort": false, 00:17:22.663 "seek_hole": false, 00:17:22.663 "seek_data": false, 00:17:22.663 "copy": false, 00:17:22.663 "nvme_iov_md": false 00:17:22.663 }, 00:17:22.663 "driver_specific": { 00:17:22.663 "raid": { 00:17:22.663 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:22.663 "strip_size_kb": 64, 00:17:22.663 "state": "online", 00:17:22.663 "raid_level": "raid5f", 00:17:22.663 "superblock": true, 00:17:22.663 "num_base_bdevs": 3, 00:17:22.663 "num_base_bdevs_discovered": 3, 00:17:22.663 "num_base_bdevs_operational": 3, 00:17:22.663 "base_bdevs_list": [ 00:17:22.663 { 00:17:22.663 "name": "pt1", 00:17:22.663 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.663 "is_configured": true, 00:17:22.663 "data_offset": 2048, 00:17:22.663 "data_size": 63488 00:17:22.663 }, 00:17:22.663 { 00:17:22.663 "name": "pt2", 00:17:22.663 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.663 "is_configured": true, 00:17:22.663 "data_offset": 2048, 00:17:22.663 "data_size": 63488 00:17:22.663 }, 00:17:22.663 { 00:17:22.663 "name": "pt3", 00:17:22.663 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:22.663 "is_configured": true, 00:17:22.663 "data_offset": 2048, 00:17:22.663 "data_size": 63488 00:17:22.663 } 00:17:22.663 ] 00:17:22.663 } 00:17:22.663 } 00:17:22.663 }' 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:17:22.663 pt2 00:17:22.663 pt3' 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.663 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.922 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.922 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.922 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in 
$base_bdev_names 00:17:22.922 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:17:22.922 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.922 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.922 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:17:22.922 14:16:59 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.922 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:17:22.922 14:16:59 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:17:22.922 [2024-11-27 14:17:00.007560] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' 412cfcc9-4603-4668-a255-d69bd0a33e68 '!=' 412cfcc9-4603-4668-a255-d69bd0a33e68 ']' 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:17:22.922 14:17:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.922 [2024-11-27 14:17:00.059378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:22.922 14:17:00 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:22.922 "name": "raid_bdev1", 00:17:22.922 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:22.922 "strip_size_kb": 64, 00:17:22.922 "state": "online", 00:17:22.922 "raid_level": "raid5f", 00:17:22.922 "superblock": true, 00:17:22.922 "num_base_bdevs": 3, 00:17:22.922 "num_base_bdevs_discovered": 2, 00:17:22.922 "num_base_bdevs_operational": 2, 00:17:22.922 "base_bdevs_list": [ 00:17:22.922 { 00:17:22.922 "name": null, 00:17:22.922 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:22.922 "is_configured": false, 00:17:22.922 "data_offset": 0, 00:17:22.922 "data_size": 63488 00:17:22.922 }, 00:17:22.922 { 00:17:22.922 "name": "pt2", 00:17:22.922 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.922 "is_configured": true, 00:17:22.922 "data_offset": 2048, 00:17:22.922 "data_size": 63488 00:17:22.922 }, 00:17:22.922 { 00:17:22.922 "name": "pt3", 00:17:22.922 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:22.922 "is_configured": true, 00:17:22.922 "data_offset": 2048, 00:17:22.922 "data_size": 63488 00:17:22.922 } 00:17:22.922 ] 00:17:22.922 }' 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:22.922 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.491 [2024-11-27 14:17:00.579465] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.491 [2024-11-27 14:17:00.579502] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.491 [2024-11-27 14:17:00.579598] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.491 [2024-11-27 14:17:00.579676] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.491 [2024-11-27 14:17:00.579700] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 
00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.491 [2024-11-27 14:17:00.679467] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:23.491 [2024-11-27 14:17:00.679531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:23.491 [2024-11-27 14:17:00.679557] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a580 00:17:23.491 [2024-11-27 14:17:00.679574] vbdev_passthru.c: 696:vbdev_passthru_register: 
*NOTICE*: bdev claimed 00:17:23.491 [2024-11-27 14:17:00.682486] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:23.491 [2024-11-27 14:17:00.682531] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:23.491 [2024-11-27 14:17:00.682627] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:23.491 [2024-11-27 14:17:00.682703] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:23.491 pt2 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.491 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:23.491 "name": "raid_bdev1", 00:17:23.491 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:23.491 "strip_size_kb": 64, 00:17:23.491 "state": "configuring", 00:17:23.491 "raid_level": "raid5f", 00:17:23.491 "superblock": true, 00:17:23.491 "num_base_bdevs": 3, 00:17:23.492 "num_base_bdevs_discovered": 1, 00:17:23.492 "num_base_bdevs_operational": 2, 00:17:23.492 "base_bdevs_list": [ 00:17:23.492 { 00:17:23.492 "name": null, 00:17:23.492 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:23.492 "is_configured": false, 00:17:23.492 "data_offset": 2048, 00:17:23.492 "data_size": 63488 00:17:23.492 }, 00:17:23.492 { 00:17:23.492 "name": "pt2", 00:17:23.492 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:23.492 "is_configured": true, 00:17:23.492 "data_offset": 2048, 00:17:23.492 "data_size": 63488 00:17:23.492 }, 00:17:23.492 { 00:17:23.492 "name": null, 00:17:23.492 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:23.492 "is_configured": false, 00:17:23.492 "data_offset": 2048, 00:17:23.492 "data_size": 63488 00:17:23.492 } 00:17:23.492 ] 00:17:23.492 }' 00:17:23.492 14:17:00 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:23.492 14:17:00 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@519 -- # i=2 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.060 [2024-11-27 14:17:01.223610] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:24.060 [2024-11-27 14:17:01.223699] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.060 [2024-11-27 14:17:01.223730] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:17:24.060 [2024-11-27 14:17:01.223748] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.060 [2024-11-27 14:17:01.224373] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.060 [2024-11-27 14:17:01.224409] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:24.060 [2024-11-27 14:17:01.224524] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:24.060 [2024-11-27 14:17:01.224564] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:24.060 [2024-11-27 14:17:01.224721] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:17:24.060 [2024-11-27 14:17:01.224748] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:24.060 [2024-11-27 14:17:01.225074] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:17:24.060 [2024-11-27 14:17:01.230024] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:17:24.060 [2024-11-27 14:17:01.230053] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created 
with name raid_bdev1, raid_bdev 0x617000008200 00:17:24.060 [2024-11-27 14:17:01.230430] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:24.060 pt3 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.060 14:17:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.060 "name": "raid_bdev1", 00:17:24.060 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:24.060 "strip_size_kb": 64, 00:17:24.060 "state": "online", 00:17:24.060 "raid_level": "raid5f", 00:17:24.060 "superblock": true, 00:17:24.060 "num_base_bdevs": 3, 00:17:24.060 "num_base_bdevs_discovered": 2, 00:17:24.060 "num_base_bdevs_operational": 2, 00:17:24.060 "base_bdevs_list": [ 00:17:24.060 { 00:17:24.060 "name": null, 00:17:24.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.060 "is_configured": false, 00:17:24.060 "data_offset": 2048, 00:17:24.060 "data_size": 63488 00:17:24.060 }, 00:17:24.060 { 00:17:24.060 "name": "pt2", 00:17:24.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.060 "is_configured": true, 00:17:24.060 "data_offset": 2048, 00:17:24.060 "data_size": 63488 00:17:24.060 }, 00:17:24.060 { 00:17:24.060 "name": "pt3", 00:17:24.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:24.060 "is_configured": true, 00:17:24.060 "data_offset": 2048, 00:17:24.060 "data_size": 63488 00:17:24.060 } 00:17:24.060 ] 00:17:24.060 }' 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.060 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.628 [2024-11-27 14:17:01.756071] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.628 [2024-11-27 14:17:01.756113] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:24.628 [2024-11-27 14:17:01.756204] 
bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:24.628 [2024-11-27 14:17:01.756289] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:24.628 [2024-11-27 14:17:01.756305] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 3 -gt 2 ']' 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=2 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt3 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 
00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.628 [2024-11-27 14:17:01.844109] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:24.628 [2024-11-27 14:17:01.844175] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.628 [2024-11-27 14:17:01.844204] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:24.628 [2024-11-27 14:17:01.844219] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.628 [2024-11-27 14:17:01.847108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.628 [2024-11-27 14:17:01.847147] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:24.628 [2024-11-27 14:17:01.847247] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:24.628 [2024-11-27 14:17:01.847306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:24.628 [2024-11-27 14:17:01.847480] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:24.628 [2024-11-27 14:17:01.847499] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.628 [2024-11-27 14:17:01.847521] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:17:24.628 [2024-11-27 14:17:01.847587] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:24.628 pt1 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.628 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 3 -gt 2 ']' 00:17:24.628 14:17:01 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 2 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:24.629 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.889 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:24.889 "name": "raid_bdev1", 00:17:24.889 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:24.889 "strip_size_kb": 64, 00:17:24.889 "state": "configuring", 00:17:24.889 "raid_level": "raid5f", 00:17:24.889 
"superblock": true, 00:17:24.889 "num_base_bdevs": 3, 00:17:24.889 "num_base_bdevs_discovered": 1, 00:17:24.889 "num_base_bdevs_operational": 2, 00:17:24.889 "base_bdevs_list": [ 00:17:24.889 { 00:17:24.889 "name": null, 00:17:24.889 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:24.889 "is_configured": false, 00:17:24.889 "data_offset": 2048, 00:17:24.889 "data_size": 63488 00:17:24.889 }, 00:17:24.889 { 00:17:24.889 "name": "pt2", 00:17:24.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:24.889 "is_configured": true, 00:17:24.889 "data_offset": 2048, 00:17:24.889 "data_size": 63488 00:17:24.889 }, 00:17:24.889 { 00:17:24.889 "name": null, 00:17:24.889 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:24.889 "is_configured": false, 00:17:24.889 "data_offset": 2048, 00:17:24.889 "data_size": 63488 00:17:24.889 } 00:17:24.889 ] 00:17:24.889 }' 00:17:24.889 14:17:01 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:24.889 14:17:01 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.148 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:17:25.148 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.148 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.148 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:25.148 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.148 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:17:25.148 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:17:25.148 14:17:02 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.148 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.148 [2024-11-27 14:17:02.424309] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:17:25.148 [2024-11-27 14:17:02.424383] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.148 [2024-11-27 14:17:02.424425] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:17:25.148 [2024-11-27 14:17:02.424449] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.407 [2024-11-27 14:17:02.425083] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.407 [2024-11-27 14:17:02.425111] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:17:25.407 [2024-11-27 14:17:02.425213] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:17:25.407 [2024-11-27 14:17:02.425247] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:17:25.407 [2024-11-27 14:17:02.425438] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:17:25.407 [2024-11-27 14:17:02.425461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:25.407 [2024-11-27 14:17:02.425800] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:25.407 [2024-11-27 14:17:02.430892] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:17:25.407 [2024-11-27 14:17:02.430930] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:17:25.407 [2024-11-27 14:17:02.431220] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.407 pt3 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:25.407 "name": "raid_bdev1", 00:17:25.407 "uuid": "412cfcc9-4603-4668-a255-d69bd0a33e68", 00:17:25.407 "strip_size_kb": 64, 00:17:25.407 "state": "online", 00:17:25.407 "raid_level": 
"raid5f", 00:17:25.407 "superblock": true, 00:17:25.407 "num_base_bdevs": 3, 00:17:25.407 "num_base_bdevs_discovered": 2, 00:17:25.407 "num_base_bdevs_operational": 2, 00:17:25.407 "base_bdevs_list": [ 00:17:25.407 { 00:17:25.407 "name": null, 00:17:25.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:25.407 "is_configured": false, 00:17:25.407 "data_offset": 2048, 00:17:25.407 "data_size": 63488 00:17:25.407 }, 00:17:25.407 { 00:17:25.407 "name": "pt2", 00:17:25.407 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.407 "is_configured": true, 00:17:25.407 "data_offset": 2048, 00:17:25.407 "data_size": 63488 00:17:25.407 }, 00:17:25.407 { 00:17:25.407 "name": "pt3", 00:17:25.407 "uuid": "00000000-0000-0000-0000-000000000003", 00:17:25.407 "is_configured": true, 00:17:25.407 "data_offset": 2048, 00:17:25.407 "data_size": 63488 00:17:25.407 } 00:17:25.407 ] 00:17:25.407 }' 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:25.407 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.666 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:17:25.666 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:25.666 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.667 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.667 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.925 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:17:25.925 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:25.925 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:25.925 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:25.925 14:17:02 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:17:25.925 [2024-11-27 14:17:02.977149] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:25.925 14:17:02 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' 412cfcc9-4603-4668-a255-d69bd0a33e68 '!=' 412cfcc9-4603-4668-a255-d69bd0a33e68 ']' 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 81396 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 81396 ']' 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 81396 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81396 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:25.925 killing process with pid 81396 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81396' 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 81396 00:17:25.925 [2024-11-27 14:17:03.055362] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:25.925 14:17:03 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 
81396 00:17:25.925 [2024-11-27 14:17:03.055496] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:25.925 [2024-11-27 14:17:03.055612] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:25.925 [2024-11-27 14:17:03.055643] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:17:26.184 [2024-11-27 14:17:03.336072] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:27.121 14:17:04 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:17:27.121 00:17:27.121 real 0m8.663s 00:17:27.121 user 0m14.180s 00:17:27.121 sys 0m1.210s 00:17:27.121 14:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:27.121 ************************************ 00:17:27.121 END TEST raid5f_superblock_test 00:17:27.121 ************************************ 00:17:27.121 14:17:04 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.381 14:17:04 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:17:27.381 14:17:04 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 3 false false true 00:17:27.381 14:17:04 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:27.381 14:17:04 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:27.381 14:17:04 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:27.381 ************************************ 00:17:27.381 START TEST raid5f_rebuild_test 00:17:27.381 ************************************ 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 false false true 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # 
local num_base_bdevs=3 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:27.381 14:17:04 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=81851 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 81851 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 81851 ']' 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:27.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:27.381 14:17:04 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:27.381 [2024-11-27 14:17:04.550882] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:17:27.381 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:27.381 Zero copy mechanism will not be used. 00:17:27.381 [2024-11-27 14:17:04.551048] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81851 ] 00:17:27.640 [2024-11-27 14:17:04.729616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:27.640 [2024-11-27 14:17:04.860392] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:27.899 [2024-11-27 14:17:05.067485] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:27.899 [2024-11-27 14:17:05.067572] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.467 BaseBdev1_malloc 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.467 
14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.467 [2024-11-27 14:17:05.565148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:28.467 [2024-11-27 14:17:05.565222] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.467 [2024-11-27 14:17:05.565254] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:28.467 [2024-11-27 14:17:05.565273] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.467 [2024-11-27 14:17:05.568408] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.467 [2024-11-27 14:17:05.568452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:28.467 BaseBdev1 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.467 BaseBdev2_malloc 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.467 [2024-11-27 14:17:05.618848] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:28.467 [2024-11-27 14:17:05.619068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.467 [2024-11-27 14:17:05.619111] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:28.467 [2024-11-27 14:17:05.619130] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.467 [2024-11-27 14:17:05.622095] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.467 [2024-11-27 14:17:05.622260] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:28.467 BaseBdev2 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.467 BaseBdev3_malloc 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.467 [2024-11-27 14:17:05.679563] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:28.467 [2024-11-27 14:17:05.679810] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.467 [2024-11-27 14:17:05.679852] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:28.467 [2024-11-27 14:17:05.679872] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.467 [2024-11-27 14:17:05.682626] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.467 [2024-11-27 14:17:05.682671] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:17:28.467 BaseBdev3 00:17:28.467 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.468 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:28.468 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.468 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.468 spare_malloc 00:17:28.468 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.468 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:28.468 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.468 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.468 spare_delay 00:17:28.468 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.468 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:28.468 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:28.468 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.726 [2024-11-27 14:17:05.748532] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:28.726 [2024-11-27 14:17:05.748616] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.726 [2024-11-27 14:17:05.748658] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:28.726 [2024-11-27 14:17:05.748675] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.726 [2024-11-27 14:17:05.751645] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.726 [2024-11-27 14:17:05.751855] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:28.726 spare 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.726 [2024-11-27 14:17:05.760719] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:28.726 [2024-11-27 14:17:05.763225] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:28.726 [2024-11-27 14:17:05.763477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:28.726 [2024-11-27 14:17:05.763604] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:28.726 [2024-11-27 14:17:05.763622] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:17:28.726 [2024-11-27 
14:17:05.764020] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:28.726 [2024-11-27 14:17:05.768974] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:28.726 [2024-11-27 14:17:05.769130] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:28.726 [2024-11-27 14:17:05.769510] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:28.726 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:28.727 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:28.727 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:28.727 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:28.727 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:28.727 14:17:05 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.727 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:28.727 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.727 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:28.727 "name": "raid_bdev1", 00:17:28.727 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:28.727 "strip_size_kb": 64, 00:17:28.727 "state": "online", 00:17:28.727 "raid_level": "raid5f", 00:17:28.727 "superblock": false, 00:17:28.727 "num_base_bdevs": 3, 00:17:28.727 "num_base_bdevs_discovered": 3, 00:17:28.727 "num_base_bdevs_operational": 3, 00:17:28.727 "base_bdevs_list": [ 00:17:28.727 { 00:17:28.727 "name": "BaseBdev1", 00:17:28.727 "uuid": "ebfd466c-0843-5368-822d-7fa0d995e218", 00:17:28.727 "is_configured": true, 00:17:28.727 "data_offset": 0, 00:17:28.727 "data_size": 65536 00:17:28.727 }, 00:17:28.727 { 00:17:28.727 "name": "BaseBdev2", 00:17:28.727 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:28.727 "is_configured": true, 00:17:28.727 "data_offset": 0, 00:17:28.727 "data_size": 65536 00:17:28.727 }, 00:17:28.727 { 00:17:28.727 "name": "BaseBdev3", 00:17:28.727 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:28.727 "is_configured": true, 00:17:28.727 "data_offset": 0, 00:17:28.727 "data_size": 65536 00:17:28.727 } 00:17:28.727 ] 00:17:28.727 }' 00:17:28.727 14:17:05 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:28.727 14:17:05 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.294 14:17:06 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:29.294 [2024-11-27 14:17:06.295963] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=131072 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@11 -- # local nbd_list 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:29.294 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:17:29.552 [2024-11-27 14:17:06.627788] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:29.552 /dev/nbd0 00:17:29.552 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:29.552 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:29.552 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:29.552 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:29.552 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:29.553 1+0 records in 00:17:29.553 1+0 records out 00:17:29.553 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000563111 s, 
7.3 MB/s 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:29.553 14:17:06 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=512 oflag=direct 00:17:30.120 512+0 records in 00:17:30.120 512+0 records out 00:17:30.120 67108864 bytes (67 MB, 64 MiB) copied, 0.418393 s, 160 MB/s 00:17:30.120 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:30.120 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:30.120 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:30.120 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:30.120 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:30.120 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i 
in "${nbd_list[@]}" 00:17:30.120 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:30.379 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:30.379 [2024-11-27 14:17:07.437197] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:30.379 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:30.379 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:30.379 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.380 [2024-11-27 14:17:07.452571] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 
00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:30.380 "name": "raid_bdev1", 00:17:30.380 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:30.380 "strip_size_kb": 64, 00:17:30.380 "state": "online", 00:17:30.380 "raid_level": "raid5f", 00:17:30.380 "superblock": false, 00:17:30.380 "num_base_bdevs": 3, 00:17:30.380 "num_base_bdevs_discovered": 2, 00:17:30.380 "num_base_bdevs_operational": 2, 00:17:30.380 "base_bdevs_list": [ 00:17:30.380 { 00:17:30.380 "name": null, 00:17:30.380 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.380 "is_configured": false, 00:17:30.380 "data_offset": 0, 00:17:30.380 "data_size": 65536 00:17:30.380 }, 
00:17:30.380 { 00:17:30.380 "name": "BaseBdev2", 00:17:30.380 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:30.380 "is_configured": true, 00:17:30.380 "data_offset": 0, 00:17:30.380 "data_size": 65536 00:17:30.380 }, 00:17:30.380 { 00:17:30.380 "name": "BaseBdev3", 00:17:30.380 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:30.380 "is_configured": true, 00:17:30.380 "data_offset": 0, 00:17:30.380 "data_size": 65536 00:17:30.380 } 00:17:30.380 ] 00:17:30.380 }' 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:30.380 14:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.947 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:30.947 14:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.947 14:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:30.947 [2024-11-27 14:17:07.968708] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:30.947 [2024-11-27 14:17:07.984590] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b680 00:17:30.947 14:17:07 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.947 14:17:07 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:30.947 [2024-11-27 14:17:07.991966] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:31.881 14:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:31.881 14:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:31.881 14:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:31.881 14:17:08 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@171 -- # local target=spare 00:17:31.881 14:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:31.881 14:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:31.881 14:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.881 14:17:08 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:31.881 14:17:08 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.881 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.881 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:31.881 "name": "raid_bdev1", 00:17:31.881 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:31.881 "strip_size_kb": 64, 00:17:31.881 "state": "online", 00:17:31.881 "raid_level": "raid5f", 00:17:31.881 "superblock": false, 00:17:31.882 "num_base_bdevs": 3, 00:17:31.882 "num_base_bdevs_discovered": 3, 00:17:31.882 "num_base_bdevs_operational": 3, 00:17:31.882 "process": { 00:17:31.882 "type": "rebuild", 00:17:31.882 "target": "spare", 00:17:31.882 "progress": { 00:17:31.882 "blocks": 18432, 00:17:31.882 "percent": 14 00:17:31.882 } 00:17:31.882 }, 00:17:31.882 "base_bdevs_list": [ 00:17:31.882 { 00:17:31.882 "name": "spare", 00:17:31.882 "uuid": "7887dfeb-1f5a-585c-b1db-0f007c721d08", 00:17:31.882 "is_configured": true, 00:17:31.882 "data_offset": 0, 00:17:31.882 "data_size": 65536 00:17:31.882 }, 00:17:31.882 { 00:17:31.882 "name": "BaseBdev2", 00:17:31.882 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:31.882 "is_configured": true, 00:17:31.882 "data_offset": 0, 00:17:31.882 "data_size": 65536 00:17:31.882 }, 00:17:31.882 { 00:17:31.882 "name": "BaseBdev3", 00:17:31.882 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:31.882 "is_configured": true, 00:17:31.882 
"data_offset": 0, 00:17:31.882 "data_size": 65536 00:17:31.882 } 00:17:31.882 ] 00:17:31.882 }' 00:17:31.882 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:31.882 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:31.882 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:31.882 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:31.882 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:31.882 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.882 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:31.882 [2024-11-27 14:17:09.149553] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:32.140 [2024-11-27 14:17:09.205866] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:32.140 [2024-11-27 14:17:09.205947] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:32.140 [2024-11-27 14:17:09.205976] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:32.140 [2024-11-27 14:17:09.205988] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:32.140 14:17:09 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:32.140 "name": "raid_bdev1", 00:17:32.140 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:32.140 "strip_size_kb": 64, 00:17:32.140 "state": "online", 00:17:32.140 "raid_level": "raid5f", 00:17:32.140 "superblock": false, 00:17:32.140 "num_base_bdevs": 3, 00:17:32.140 "num_base_bdevs_discovered": 2, 00:17:32.140 "num_base_bdevs_operational": 2, 00:17:32.140 "base_bdevs_list": [ 00:17:32.140 { 00:17:32.140 "name": null, 00:17:32.140 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.140 "is_configured": false, 00:17:32.140 "data_offset": 0, 00:17:32.140 "data_size": 65536 00:17:32.140 }, 00:17:32.140 { 00:17:32.140 
"name": "BaseBdev2", 00:17:32.140 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:32.140 "is_configured": true, 00:17:32.140 "data_offset": 0, 00:17:32.140 "data_size": 65536 00:17:32.140 }, 00:17:32.140 { 00:17:32.140 "name": "BaseBdev3", 00:17:32.140 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:32.140 "is_configured": true, 00:17:32.140 "data_offset": 0, 00:17:32.140 "data_size": 65536 00:17:32.140 } 00:17:32.140 ] 00:17:32.140 }' 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:32.140 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:32.707 "name": "raid_bdev1", 00:17:32.707 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:32.707 "strip_size_kb": 64, 00:17:32.707 "state": 
"online", 00:17:32.707 "raid_level": "raid5f", 00:17:32.707 "superblock": false, 00:17:32.707 "num_base_bdevs": 3, 00:17:32.707 "num_base_bdevs_discovered": 2, 00:17:32.707 "num_base_bdevs_operational": 2, 00:17:32.707 "base_bdevs_list": [ 00:17:32.707 { 00:17:32.707 "name": null, 00:17:32.707 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.707 "is_configured": false, 00:17:32.707 "data_offset": 0, 00:17:32.707 "data_size": 65536 00:17:32.707 }, 00:17:32.707 { 00:17:32.707 "name": "BaseBdev2", 00:17:32.707 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:32.707 "is_configured": true, 00:17:32.707 "data_offset": 0, 00:17:32.707 "data_size": 65536 00:17:32.707 }, 00:17:32.707 { 00:17:32.707 "name": "BaseBdev3", 00:17:32.707 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:32.707 "is_configured": true, 00:17:32.707 "data_offset": 0, 00:17:32.707 "data_size": 65536 00:17:32.707 } 00:17:32.707 ] 00:17:32.707 }' 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:32.707 [2024-11-27 14:17:09.921949] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:32.707 [2024-11-27 14:17:09.938236] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:17:32.707 14:17:09 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.707 14:17:09 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:32.707 [2024-11-27 14:17:09.945910] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:34.082 14:17:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.082 14:17:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.082 14:17:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.082 14:17:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.082 14:17:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.082 14:17:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.082 14:17:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.082 14:17:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.082 14:17:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.082 14:17:10 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.082 14:17:10 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.082 "name": "raid_bdev1", 00:17:34.082 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:34.082 "strip_size_kb": 64, 00:17:34.082 "state": "online", 00:17:34.082 "raid_level": "raid5f", 00:17:34.082 "superblock": false, 00:17:34.082 "num_base_bdevs": 3, 00:17:34.082 "num_base_bdevs_discovered": 3, 00:17:34.082 "num_base_bdevs_operational": 3, 00:17:34.082 "process": { 00:17:34.082 "type": "rebuild", 00:17:34.082 "target": "spare", 00:17:34.082 "progress": { 
00:17:34.082 "blocks": 18432, 00:17:34.082 "percent": 14 00:17:34.082 } 00:17:34.082 }, 00:17:34.082 "base_bdevs_list": [ 00:17:34.082 { 00:17:34.082 "name": "spare", 00:17:34.082 "uuid": "7887dfeb-1f5a-585c-b1db-0f007c721d08", 00:17:34.082 "is_configured": true, 00:17:34.082 "data_offset": 0, 00:17:34.082 "data_size": 65536 00:17:34.082 }, 00:17:34.082 { 00:17:34.082 "name": "BaseBdev2", 00:17:34.082 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:34.082 "is_configured": true, 00:17:34.082 "data_offset": 0, 00:17:34.082 "data_size": 65536 00:17:34.082 }, 00:17:34.082 { 00:17:34.082 "name": "BaseBdev3", 00:17:34.082 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:34.082 "is_configured": true, 00:17:34.082 "data_offset": 0, 00:17:34.082 "data_size": 65536 00:17:34.082 } 00:17:34.082 ] 00:17:34.082 }' 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=598 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.082 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:34.082 "name": "raid_bdev1", 00:17:34.082 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:34.082 "strip_size_kb": 64, 00:17:34.082 "state": "online", 00:17:34.082 "raid_level": "raid5f", 00:17:34.082 "superblock": false, 00:17:34.082 "num_base_bdevs": 3, 00:17:34.082 "num_base_bdevs_discovered": 3, 00:17:34.082 "num_base_bdevs_operational": 3, 00:17:34.082 "process": { 00:17:34.082 "type": "rebuild", 00:17:34.082 "target": "spare", 00:17:34.082 "progress": { 00:17:34.082 "blocks": 22528, 00:17:34.082 "percent": 17 00:17:34.082 } 00:17:34.082 }, 00:17:34.082 "base_bdevs_list": [ 00:17:34.082 { 00:17:34.082 "name": "spare", 00:17:34.082 "uuid": "7887dfeb-1f5a-585c-b1db-0f007c721d08", 00:17:34.082 "is_configured": true, 00:17:34.082 "data_offset": 0, 00:17:34.082 "data_size": 65536 00:17:34.082 }, 00:17:34.082 { 00:17:34.082 "name": "BaseBdev2", 00:17:34.082 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:34.082 "is_configured": true, 00:17:34.082 
"data_offset": 0, 00:17:34.082 "data_size": 65536 00:17:34.082 }, 00:17:34.082 { 00:17:34.082 "name": "BaseBdev3", 00:17:34.082 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:34.082 "is_configured": true, 00:17:34.082 "data_offset": 0, 00:17:34.082 "data_size": 65536 00:17:34.082 } 00:17:34.082 ] 00:17:34.083 }' 00:17:34.083 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:34.083 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:34.083 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:34.083 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:34.083 14:17:11 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:35.153 14:17:12 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:35.153 "name": "raid_bdev1", 00:17:35.153 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:35.153 "strip_size_kb": 64, 00:17:35.153 "state": "online", 00:17:35.153 "raid_level": "raid5f", 00:17:35.153 "superblock": false, 00:17:35.153 "num_base_bdevs": 3, 00:17:35.153 "num_base_bdevs_discovered": 3, 00:17:35.153 "num_base_bdevs_operational": 3, 00:17:35.153 "process": { 00:17:35.153 "type": "rebuild", 00:17:35.153 "target": "spare", 00:17:35.153 "progress": { 00:17:35.153 "blocks": 45056, 00:17:35.153 "percent": 34 00:17:35.153 } 00:17:35.153 }, 00:17:35.153 "base_bdevs_list": [ 00:17:35.153 { 00:17:35.153 "name": "spare", 00:17:35.153 "uuid": "7887dfeb-1f5a-585c-b1db-0f007c721d08", 00:17:35.153 "is_configured": true, 00:17:35.153 "data_offset": 0, 00:17:35.153 "data_size": 65536 00:17:35.153 }, 00:17:35.153 { 00:17:35.153 "name": "BaseBdev2", 00:17:35.153 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:35.153 "is_configured": true, 00:17:35.153 "data_offset": 0, 00:17:35.153 "data_size": 65536 00:17:35.153 }, 00:17:35.153 { 00:17:35.153 "name": "BaseBdev3", 00:17:35.153 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:35.153 "is_configured": true, 00:17:35.153 "data_offset": 0, 00:17:35.153 "data_size": 65536 00:17:35.153 } 00:17:35.153 ] 00:17:35.153 }' 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:35.153 14:17:12 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@711 -- # sleep 1 00:17:36.530 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:36.530 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:36.530 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:36.530 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:36.530 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:36.530 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:36.530 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:36.530 14:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.530 14:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:36.530 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:36.530 14:17:13 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.531 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:36.531 "name": "raid_bdev1", 00:17:36.531 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:36.531 "strip_size_kb": 64, 00:17:36.531 "state": "online", 00:17:36.531 "raid_level": "raid5f", 00:17:36.531 "superblock": false, 00:17:36.531 "num_base_bdevs": 3, 00:17:36.531 "num_base_bdevs_discovered": 3, 00:17:36.531 "num_base_bdevs_operational": 3, 00:17:36.531 "process": { 00:17:36.531 "type": "rebuild", 00:17:36.531 "target": "spare", 00:17:36.531 "progress": { 00:17:36.531 "blocks": 69632, 00:17:36.531 "percent": 53 00:17:36.531 } 00:17:36.531 }, 00:17:36.531 "base_bdevs_list": [ 00:17:36.531 { 00:17:36.531 "name": "spare", 00:17:36.531 
"uuid": "7887dfeb-1f5a-585c-b1db-0f007c721d08", 00:17:36.531 "is_configured": true, 00:17:36.531 "data_offset": 0, 00:17:36.531 "data_size": 65536 00:17:36.531 }, 00:17:36.531 { 00:17:36.531 "name": "BaseBdev2", 00:17:36.531 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:36.531 "is_configured": true, 00:17:36.531 "data_offset": 0, 00:17:36.531 "data_size": 65536 00:17:36.531 }, 00:17:36.531 { 00:17:36.531 "name": "BaseBdev3", 00:17:36.531 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:36.531 "is_configured": true, 00:17:36.531 "data_offset": 0, 00:17:36.531 "data_size": 65536 00:17:36.531 } 00:17:36.531 ] 00:17:36.531 }' 00:17:36.531 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:36.531 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:36.531 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:36.531 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:36.531 14:17:13 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:37.467 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:37.467 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:37.467 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:37.467 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:37.467 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:37.467 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:37.467 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:37.467 14:17:14 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:37.467 14:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.467 14:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:37.467 14:17:14 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.467 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:37.467 "name": "raid_bdev1", 00:17:37.467 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:37.467 "strip_size_kb": 64, 00:17:37.467 "state": "online", 00:17:37.467 "raid_level": "raid5f", 00:17:37.467 "superblock": false, 00:17:37.467 "num_base_bdevs": 3, 00:17:37.467 "num_base_bdevs_discovered": 3, 00:17:37.467 "num_base_bdevs_operational": 3, 00:17:37.467 "process": { 00:17:37.467 "type": "rebuild", 00:17:37.467 "target": "spare", 00:17:37.467 "progress": { 00:17:37.467 "blocks": 92160, 00:17:37.467 "percent": 70 00:17:37.467 } 00:17:37.467 }, 00:17:37.467 "base_bdevs_list": [ 00:17:37.467 { 00:17:37.468 "name": "spare", 00:17:37.468 "uuid": "7887dfeb-1f5a-585c-b1db-0f007c721d08", 00:17:37.468 "is_configured": true, 00:17:37.468 "data_offset": 0, 00:17:37.468 "data_size": 65536 00:17:37.468 }, 00:17:37.468 { 00:17:37.468 "name": "BaseBdev2", 00:17:37.468 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:37.468 "is_configured": true, 00:17:37.468 "data_offset": 0, 00:17:37.468 "data_size": 65536 00:17:37.468 }, 00:17:37.468 { 00:17:37.468 "name": "BaseBdev3", 00:17:37.468 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:37.468 "is_configured": true, 00:17:37.468 "data_offset": 0, 00:17:37.468 "data_size": 65536 00:17:37.468 } 00:17:37.468 ] 00:17:37.468 }' 00:17:37.468 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:37.468 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- 
# [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:37.468 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:37.468 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:37.468 14:17:14 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:38.866 "name": "raid_bdev1", 00:17:38.866 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:38.866 "strip_size_kb": 64, 00:17:38.866 "state": "online", 00:17:38.866 "raid_level": "raid5f", 00:17:38.866 "superblock": false, 00:17:38.866 "num_base_bdevs": 3, 00:17:38.866 "num_base_bdevs_discovered": 3, 00:17:38.866 
"num_base_bdevs_operational": 3, 00:17:38.866 "process": { 00:17:38.866 "type": "rebuild", 00:17:38.866 "target": "spare", 00:17:38.866 "progress": { 00:17:38.866 "blocks": 116736, 00:17:38.866 "percent": 89 00:17:38.866 } 00:17:38.866 }, 00:17:38.866 "base_bdevs_list": [ 00:17:38.866 { 00:17:38.866 "name": "spare", 00:17:38.866 "uuid": "7887dfeb-1f5a-585c-b1db-0f007c721d08", 00:17:38.866 "is_configured": true, 00:17:38.866 "data_offset": 0, 00:17:38.866 "data_size": 65536 00:17:38.866 }, 00:17:38.866 { 00:17:38.866 "name": "BaseBdev2", 00:17:38.866 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:38.866 "is_configured": true, 00:17:38.866 "data_offset": 0, 00:17:38.866 "data_size": 65536 00:17:38.866 }, 00:17:38.866 { 00:17:38.866 "name": "BaseBdev3", 00:17:38.866 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:38.866 "is_configured": true, 00:17:38.866 "data_offset": 0, 00:17:38.866 "data_size": 65536 00:17:38.866 } 00:17:38.866 ] 00:17:38.866 }' 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:38.866 14:17:15 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:39.433 [2024-11-27 14:17:16.427034] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:39.433 [2024-11-27 14:17:16.427536] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:39.433 [2024-11-27 14:17:16.427624] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < 
timeout )) 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.692 14:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.692 "name": "raid_bdev1", 00:17:39.692 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:39.692 "strip_size_kb": 64, 00:17:39.692 "state": "online", 00:17:39.692 "raid_level": "raid5f", 00:17:39.692 "superblock": false, 00:17:39.692 "num_base_bdevs": 3, 00:17:39.692 "num_base_bdevs_discovered": 3, 00:17:39.692 "num_base_bdevs_operational": 3, 00:17:39.692 "base_bdevs_list": [ 00:17:39.692 { 00:17:39.692 "name": "spare", 00:17:39.692 "uuid": "7887dfeb-1f5a-585c-b1db-0f007c721d08", 00:17:39.692 "is_configured": true, 00:17:39.692 "data_offset": 0, 00:17:39.692 "data_size": 65536 00:17:39.692 }, 00:17:39.692 { 00:17:39.692 "name": "BaseBdev2", 00:17:39.692 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:39.692 "is_configured": true, 00:17:39.692 
"data_offset": 0, 00:17:39.692 "data_size": 65536 00:17:39.692 }, 00:17:39.692 { 00:17:39.692 "name": "BaseBdev3", 00:17:39.692 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:39.692 "is_configured": true, 00:17:39.692 "data_offset": 0, 00:17:39.692 "data_size": 65536 00:17:39.692 } 00:17:39.692 ] 00:17:39.692 }' 00:17:39.951 14:17:16 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.951 14:17:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:39.951 "name": "raid_bdev1", 00:17:39.951 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:39.951 "strip_size_kb": 64, 00:17:39.951 "state": "online", 00:17:39.951 "raid_level": "raid5f", 00:17:39.951 "superblock": false, 00:17:39.951 "num_base_bdevs": 3, 00:17:39.951 "num_base_bdevs_discovered": 3, 00:17:39.951 "num_base_bdevs_operational": 3, 00:17:39.951 "base_bdevs_list": [ 00:17:39.951 { 00:17:39.951 "name": "spare", 00:17:39.951 "uuid": "7887dfeb-1f5a-585c-b1db-0f007c721d08", 00:17:39.951 "is_configured": true, 00:17:39.951 "data_offset": 0, 00:17:39.951 "data_size": 65536 00:17:39.951 }, 00:17:39.951 { 00:17:39.951 "name": "BaseBdev2", 00:17:39.951 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:39.951 "is_configured": true, 00:17:39.951 "data_offset": 0, 00:17:39.951 "data_size": 65536 00:17:39.951 }, 00:17:39.951 { 00:17:39.951 "name": "BaseBdev3", 00:17:39.951 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:39.951 "is_configured": true, 00:17:39.951 "data_offset": 0, 00:17:39.951 "data_size": 65536 00:17:39.951 } 00:17:39.951 ] 00:17:39.951 }' 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:39.951 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:40.210 14:17:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:40.210 "name": "raid_bdev1", 00:17:40.210 "uuid": "3bc98d96-9d6b-4a09-a937-7af81a6a144b", 00:17:40.210 "strip_size_kb": 64, 00:17:40.210 "state": "online", 00:17:40.210 "raid_level": "raid5f", 00:17:40.210 "superblock": false, 00:17:40.210 "num_base_bdevs": 3, 00:17:40.210 "num_base_bdevs_discovered": 3, 00:17:40.210 "num_base_bdevs_operational": 3, 00:17:40.210 "base_bdevs_list": [ 00:17:40.210 { 00:17:40.210 "name": "spare", 00:17:40.210 "uuid": "7887dfeb-1f5a-585c-b1db-0f007c721d08", 00:17:40.210 "is_configured": true, 00:17:40.210 "data_offset": 0, 00:17:40.210 "data_size": 65536 00:17:40.210 }, 00:17:40.210 { 00:17:40.210 
"name": "BaseBdev2", 00:17:40.210 "uuid": "ca556cc4-3ee1-5aeb-a681-ba6fbaad7a3f", 00:17:40.210 "is_configured": true, 00:17:40.210 "data_offset": 0, 00:17:40.210 "data_size": 65536 00:17:40.210 }, 00:17:40.210 { 00:17:40.210 "name": "BaseBdev3", 00:17:40.210 "uuid": "d3920569-86f6-5254-89cf-9c630045f69f", 00:17:40.210 "is_configured": true, 00:17:40.210 "data_offset": 0, 00:17:40.210 "data_size": 65536 00:17:40.210 } 00:17:40.210 ] 00:17:40.210 }' 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:40.210 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.778 [2024-11-27 14:17:17.764881] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:40.778 [2024-11-27 14:17:17.764914] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:40.778 [2024-11-27 14:17:17.765022] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:40.778 [2024-11-27 14:17:17.765134] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:40.778 [2024-11-27 14:17:17.765181] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:40.778 14:17:17 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:40.778 14:17:17 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:41.037 /dev/nbd0 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:41.037 14:17:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:41.037 1+0 records in 00:17:41.037 1+0 records out 00:17:41.037 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000234351 s, 17.5 MB/s 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:41.037 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:41.296 /dev/nbd1 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:41.296 1+0 records in 00:17:41.296 1+0 records out 00:17:41.296 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000328185 s, 12.5 MB/s 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:41.296 14:17:18 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:41.296 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:17:41.555 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:41.555 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:41.555 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:41.555 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:41.555 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:17:41.555 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:41.555 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:41.813 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:41.813 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:41.813 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:41.813 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:41.813 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:41.813 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:41.813 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:41.813 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # 
return 0 00:17:41.813 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:41.813 14:17:18 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 81851 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 81851 ']' 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 81851 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 81851 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:42.072 killing process with pid 81851 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 81851' 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 81851 00:17:42.072 Received shutdown signal, test time was about 60.000000 seconds 00:17:42.072 00:17:42.072 Latency(us) 00:17:42.072 [2024-11-27T14:17:19.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.072 [2024-11-27T14:17:19.350Z] =================================================================================================================== 00:17:42.072 [2024-11-27T14:17:19.350Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.072 [2024-11-27 14:17:19.305998] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:42.072 14:17:19 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # wait 81851 00:17:42.641 [2024-11-27 14:17:19.658341] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:43.579 14:17:20 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:17:43.579 00:17:43.579 real 0m16.254s 00:17:43.579 user 0m20.776s 00:17:43.579 sys 0m1.937s 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:43.580 ************************************ 00:17:43.580 END TEST raid5f_rebuild_test 00:17:43.580 ************************************ 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:17:43.580 14:17:20 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 3 true false true 00:17:43.580 14:17:20 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:17:43.580 14:17:20 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:43.580 14:17:20 bdev_raid 
-- common/autotest_common.sh@10 -- # set +x 00:17:43.580 ************************************ 00:17:43.580 START TEST raid5f_rebuild_test_sb 00:17:43.580 ************************************ 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 3 true false true 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=3 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 
00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:17:43.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=82296 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 82296 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 82296 ']' 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:43.580 14:17:20 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:43.839 [2024-11-27 14:17:20.857894] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:17:43.840 [2024-11-27 14:17:20.858295] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82296 ] 00:17:43.840 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:43.840 Zero copy mechanism will not be used. 
00:17:43.840 [2024-11-27 14:17:21.046938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.098 [2024-11-27 14:17:21.205597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.357 [2024-11-27 14:17:21.419155] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.357 [2024-11-27 14:17:21.419391] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:44.617 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.617 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:17:44.617 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:44.617 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:17:44.617 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.617 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.876 BaseBdev1_malloc 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.877 [2024-11-27 14:17:21.911423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:44.877 [2024-11-27 14:17:21.911674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.877 [2024-11-27 14:17:21.911718] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:17:44.877 
[2024-11-27 14:17:21.911739] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.877 [2024-11-27 14:17:21.914521] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.877 [2024-11-27 14:17:21.914587] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:44.877 BaseBdev1 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.877 BaseBdev2_malloc 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.877 [2024-11-27 14:17:21.963957] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:44.877 [2024-11-27 14:17:21.964051] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.877 [2024-11-27 14:17:21.964086] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:17:44.877 [2024-11-27 14:17:21.964119] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.877 [2024-11-27 14:17:21.966955] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.877 [2024-11-27 14:17:21.967145] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:44.877 BaseBdev2 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.877 14:17:21 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.877 BaseBdev3_malloc 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.877 [2024-11-27 14:17:22.029182] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:17:44.877 [2024-11-27 14:17:22.029286] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.877 [2024-11-27 14:17:22.029321] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:17:44.877 [2024-11-27 14:17:22.029340] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.877 [2024-11-27 14:17:22.032203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.877 [2024-11-27 14:17:22.032251] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: 
BaseBdev3 00:17:44.877 BaseBdev3 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.877 spare_malloc 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.877 spare_delay 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.877 [2024-11-27 14:17:22.088711] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:44.877 [2024-11-27 14:17:22.088793] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:44.877 [2024-11-27 14:17:22.088823] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009c80 00:17:44.877 [2024-11-27 14:17:22.088840] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:44.877 [2024-11-27 14:17:22.091608] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:44.877 [2024-11-27 14:17:22.091806] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:44.877 spare 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3'\''' -n raid_bdev1 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.877 [2024-11-27 14:17:22.096823] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:44.877 [2024-11-27 14:17:22.099200] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:44.877 [2024-11-27 14:17:22.099424] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:44.877 [2024-11-27 14:17:22.099682] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:17:44.877 [2024-11-27 14:17:22.099702] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:44.877 [2024-11-27 14:17:22.100047] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:17:44.877 [2024-11-27 14:17:22.105251] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:17:44.877 [2024-11-27 14:17:22.105286] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:17:44.877 [2024-11-27 14:17:22.105529] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.877 14:17:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:44.877 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.136 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:45.136 "name": "raid_bdev1", 00:17:45.136 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:45.136 "strip_size_kb": 64, 00:17:45.136 "state": "online", 00:17:45.136 "raid_level": "raid5f", 00:17:45.136 "superblock": true, 
00:17:45.136 "num_base_bdevs": 3, 00:17:45.136 "num_base_bdevs_discovered": 3, 00:17:45.136 "num_base_bdevs_operational": 3, 00:17:45.136 "base_bdevs_list": [ 00:17:45.136 { 00:17:45.136 "name": "BaseBdev1", 00:17:45.136 "uuid": "1a4582e1-2859-53be-be60-5cb47e2817d2", 00:17:45.136 "is_configured": true, 00:17:45.136 "data_offset": 2048, 00:17:45.136 "data_size": 63488 00:17:45.136 }, 00:17:45.136 { 00:17:45.136 "name": "BaseBdev2", 00:17:45.136 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:45.136 "is_configured": true, 00:17:45.136 "data_offset": 2048, 00:17:45.136 "data_size": 63488 00:17:45.136 }, 00:17:45.136 { 00:17:45.136 "name": "BaseBdev3", 00:17:45.136 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:45.136 "is_configured": true, 00:17:45.136 "data_offset": 2048, 00:17:45.136 "data_size": 63488 00:17:45.136 } 00:17:45.136 ] 00:17:45.136 }' 00:17:45.136 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:45.136 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.395 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:17:45.395 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.395 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.395 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:17:45.395 [2024-11-27 14:17:22.631570] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:45.395 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.654 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=126976 00:17:45.654 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:45.654 14:17:22 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:45.654 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.654 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:45.654 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.654 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:17:45.654 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:17:45.654 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:17:45.654 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:17:45.655 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:17:45.655 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:45.655 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:17:45.655 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:45.655 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:17:45.655 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:45.655 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:17:45.655 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:45.655 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:45.655 14:17:22 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 
00:17:45.914 [2024-11-27 14:17:23.023523] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:17:45.914 /dev/nbd0 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:45.914 1+0 records in 00:17:45.914 1+0 records out 00:17:45.914 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00025887 s, 15.8 MB/s 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=256 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 128 00:17:45.914 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=131072 count=496 oflag=direct 00:17:46.484 496+0 records in 00:17:46.484 496+0 records out 00:17:46.484 65011712 bytes (65 MB, 62 MiB) copied, 0.483462 s, 134 MB/s 00:17:46.484 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:17:46.484 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:46.484 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:17:46.484 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:46.484 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:46.484 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:46.484 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:46.743 [2024-11-27 14:17:23.885475] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 
00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.743 [2024-11-27 14:17:23.903296] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:46.743 14:17:23 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:46.743 "name": "raid_bdev1", 00:17:46.743 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:46.743 "strip_size_kb": 64, 00:17:46.743 "state": "online", 00:17:46.743 "raid_level": "raid5f", 00:17:46.743 "superblock": true, 00:17:46.743 "num_base_bdevs": 3, 00:17:46.743 "num_base_bdevs_discovered": 2, 00:17:46.743 "num_base_bdevs_operational": 2, 00:17:46.743 "base_bdevs_list": [ 00:17:46.743 { 00:17:46.743 "name": null, 00:17:46.743 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:46.743 "is_configured": false, 00:17:46.743 "data_offset": 0, 00:17:46.743 "data_size": 63488 00:17:46.743 }, 00:17:46.743 { 00:17:46.743 "name": "BaseBdev2", 00:17:46.743 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:46.743 "is_configured": true, 00:17:46.743 "data_offset": 2048, 00:17:46.743 "data_size": 63488 00:17:46.743 }, 00:17:46.743 { 00:17:46.743 "name": "BaseBdev3", 00:17:46.743 "uuid": 
"bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:46.743 "is_configured": true, 00:17:46.743 "data_offset": 2048, 00:17:46.743 "data_size": 63488 00:17:46.743 } 00:17:46.743 ] 00:17:46.743 }' 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:46.743 14:17:23 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.310 14:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:47.310 14:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.310 14:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:47.310 [2024-11-27 14:17:24.355435] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:47.310 [2024-11-27 14:17:24.370709] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000028f80 00:17:47.310 14:17:24 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.310 14:17:24 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:17:47.310 [2024-11-27 14:17:24.378157] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs 
all 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:48.246 "name": "raid_bdev1", 00:17:48.246 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:48.246 "strip_size_kb": 64, 00:17:48.246 "state": "online", 00:17:48.246 "raid_level": "raid5f", 00:17:48.246 "superblock": true, 00:17:48.246 "num_base_bdevs": 3, 00:17:48.246 "num_base_bdevs_discovered": 3, 00:17:48.246 "num_base_bdevs_operational": 3, 00:17:48.246 "process": { 00:17:48.246 "type": "rebuild", 00:17:48.246 "target": "spare", 00:17:48.246 "progress": { 00:17:48.246 "blocks": 18432, 00:17:48.246 "percent": 14 00:17:48.246 } 00:17:48.246 }, 00:17:48.246 "base_bdevs_list": [ 00:17:48.246 { 00:17:48.246 "name": "spare", 00:17:48.246 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:48.246 "is_configured": true, 00:17:48.246 "data_offset": 2048, 00:17:48.246 "data_size": 63488 00:17:48.246 }, 00:17:48.246 { 00:17:48.246 "name": "BaseBdev2", 00:17:48.246 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:48.246 "is_configured": true, 00:17:48.246 "data_offset": 2048, 00:17:48.246 "data_size": 63488 00:17:48.246 }, 00:17:48.246 { 00:17:48.246 "name": "BaseBdev3", 00:17:48.246 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:48.246 "is_configured": true, 00:17:48.246 "data_offset": 2048, 00:17:48.246 "data_size": 63488 00:17:48.246 } 00:17:48.246 ] 00:17:48.246 }' 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:48.246 14:17:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:48.246 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.505 [2024-11-27 14:17:25.539645] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.505 [2024-11-27 14:17:25.593162] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:17:48.505 [2024-11-27 14:17:25.593236] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:48.505 [2024-11-27 14:17:25.593267] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:48.505 [2024-11-27 14:17:25.593279] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:48.505 14:17:25 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:48.505 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:48.506 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.506 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:48.506 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:48.506 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.506 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:48.506 "name": "raid_bdev1", 00:17:48.506 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:48.506 "strip_size_kb": 64, 00:17:48.506 "state": "online", 00:17:48.506 "raid_level": "raid5f", 00:17:48.506 "superblock": true, 00:17:48.506 "num_base_bdevs": 3, 00:17:48.506 "num_base_bdevs_discovered": 2, 00:17:48.506 "num_base_bdevs_operational": 2, 00:17:48.506 "base_bdevs_list": [ 00:17:48.506 { 00:17:48.506 "name": null, 00:17:48.506 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:48.506 "is_configured": false, 00:17:48.506 "data_offset": 0, 00:17:48.506 "data_size": 63488 00:17:48.506 }, 00:17:48.506 { 00:17:48.506 "name": "BaseBdev2", 00:17:48.506 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:48.506 "is_configured": true, 00:17:48.506 "data_offset": 2048, 00:17:48.506 "data_size": 
63488 00:17:48.506 }, 00:17:48.506 { 00:17:48.506 "name": "BaseBdev3", 00:17:48.506 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:48.506 "is_configured": true, 00:17:48.506 "data_offset": 2048, 00:17:48.506 "data_size": 63488 00:17:48.506 } 00:17:48.506 ] 00:17:48.506 }' 00:17:48.506 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:48.506 14:17:25 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.074 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:49.074 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:49.074 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:49.074 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:49.074 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:49.075 "name": "raid_bdev1", 00:17:49.075 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:49.075 "strip_size_kb": 64, 00:17:49.075 "state": "online", 00:17:49.075 "raid_level": "raid5f", 00:17:49.075 "superblock": true, 00:17:49.075 "num_base_bdevs": 3, 00:17:49.075 
"num_base_bdevs_discovered": 2, 00:17:49.075 "num_base_bdevs_operational": 2, 00:17:49.075 "base_bdevs_list": [ 00:17:49.075 { 00:17:49.075 "name": null, 00:17:49.075 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.075 "is_configured": false, 00:17:49.075 "data_offset": 0, 00:17:49.075 "data_size": 63488 00:17:49.075 }, 00:17:49.075 { 00:17:49.075 "name": "BaseBdev2", 00:17:49.075 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:49.075 "is_configured": true, 00:17:49.075 "data_offset": 2048, 00:17:49.075 "data_size": 63488 00:17:49.075 }, 00:17:49.075 { 00:17:49.075 "name": "BaseBdev3", 00:17:49.075 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:49.075 "is_configured": true, 00:17:49.075 "data_offset": 2048, 00:17:49.075 "data_size": 63488 00:17:49.075 } 00:17:49.075 ] 00:17:49.075 }' 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:49.075 [2024-11-27 14:17:26.288436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:49.075 [2024-11-27 14:17:26.302917] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000029050 00:17:49.075 14:17:26 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.075 14:17:26 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:17:49.075 [2024-11-27 14:17:26.310134] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.458 "name": "raid_bdev1", 00:17:50.458 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:50.458 "strip_size_kb": 64, 00:17:50.458 "state": "online", 00:17:50.458 "raid_level": "raid5f", 00:17:50.458 "superblock": true, 00:17:50.458 "num_base_bdevs": 3, 00:17:50.458 "num_base_bdevs_discovered": 3, 00:17:50.458 "num_base_bdevs_operational": 3, 00:17:50.458 "process": { 00:17:50.458 "type": "rebuild", 00:17:50.458 "target": "spare", 00:17:50.458 "progress": { 00:17:50.458 "blocks": 18432, 00:17:50.458 "percent": 14 00:17:50.458 } 
00:17:50.458 }, 00:17:50.458 "base_bdevs_list": [ 00:17:50.458 { 00:17:50.458 "name": "spare", 00:17:50.458 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:50.458 "is_configured": true, 00:17:50.458 "data_offset": 2048, 00:17:50.458 "data_size": 63488 00:17:50.458 }, 00:17:50.458 { 00:17:50.458 "name": "BaseBdev2", 00:17:50.458 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:50.458 "is_configured": true, 00:17:50.458 "data_offset": 2048, 00:17:50.458 "data_size": 63488 00:17:50.458 }, 00:17:50.458 { 00:17:50.458 "name": "BaseBdev3", 00:17:50.458 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:50.458 "is_configured": true, 00:17:50.458 "data_offset": 2048, 00:17:50.458 "data_size": 63488 00:17:50.458 } 00:17:50.458 ] 00:17:50.458 }' 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:17:50.458 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=3 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=614 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:50.458 14:17:27 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.458 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:50.458 "name": "raid_bdev1", 00:17:50.458 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:50.458 "strip_size_kb": 64, 00:17:50.458 "state": "online", 00:17:50.458 "raid_level": "raid5f", 00:17:50.458 "superblock": true, 00:17:50.458 "num_base_bdevs": 3, 00:17:50.458 "num_base_bdevs_discovered": 3, 00:17:50.458 "num_base_bdevs_operational": 3, 00:17:50.458 "process": { 00:17:50.458 "type": "rebuild", 00:17:50.458 "target": "spare", 00:17:50.458 "progress": { 00:17:50.458 "blocks": 22528, 00:17:50.458 "percent": 17 00:17:50.458 } 00:17:50.458 }, 00:17:50.458 "base_bdevs_list": [ 00:17:50.458 { 00:17:50.458 "name": "spare", 00:17:50.458 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:50.458 "is_configured": true, 00:17:50.458 "data_offset": 2048, 00:17:50.458 
"data_size": 63488 00:17:50.458 }, 00:17:50.458 { 00:17:50.458 "name": "BaseBdev2", 00:17:50.458 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:50.458 "is_configured": true, 00:17:50.458 "data_offset": 2048, 00:17:50.458 "data_size": 63488 00:17:50.458 }, 00:17:50.458 { 00:17:50.458 "name": "BaseBdev3", 00:17:50.458 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:50.458 "is_configured": true, 00:17:50.458 "data_offset": 2048, 00:17:50.458 "data_size": 63488 00:17:50.458 } 00:17:50.458 ] 00:17:50.458 }' 00:17:50.459 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:50.459 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:50.459 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:50.459 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:50.459 14:17:27 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:51.394 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:51.394 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:51.394 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:51.394 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:51.394 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:51.394 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:51.395 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:51.395 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:51.395 
14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:51.395 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:51.395 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:51.653 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:51.653 "name": "raid_bdev1", 00:17:51.653 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:51.653 "strip_size_kb": 64, 00:17:51.653 "state": "online", 00:17:51.653 "raid_level": "raid5f", 00:17:51.653 "superblock": true, 00:17:51.653 "num_base_bdevs": 3, 00:17:51.653 "num_base_bdevs_discovered": 3, 00:17:51.653 "num_base_bdevs_operational": 3, 00:17:51.653 "process": { 00:17:51.653 "type": "rebuild", 00:17:51.653 "target": "spare", 00:17:51.653 "progress": { 00:17:51.653 "blocks": 47104, 00:17:51.653 "percent": 37 00:17:51.653 } 00:17:51.653 }, 00:17:51.653 "base_bdevs_list": [ 00:17:51.653 { 00:17:51.653 "name": "spare", 00:17:51.653 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:51.653 "is_configured": true, 00:17:51.653 "data_offset": 2048, 00:17:51.653 "data_size": 63488 00:17:51.653 }, 00:17:51.653 { 00:17:51.653 "name": "BaseBdev2", 00:17:51.653 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:51.653 "is_configured": true, 00:17:51.653 "data_offset": 2048, 00:17:51.653 "data_size": 63488 00:17:51.653 }, 00:17:51.653 { 00:17:51.653 "name": "BaseBdev3", 00:17:51.653 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:51.653 "is_configured": true, 00:17:51.653 "data_offset": 2048, 00:17:51.653 "data_size": 63488 00:17:51.653 } 00:17:51.653 ] 00:17:51.653 }' 00:17:51.653 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:51.653 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:51.653 14:17:28 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:51.653 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:51.653 14:17:28 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:52.589 "name": "raid_bdev1", 00:17:52.589 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:52.589 "strip_size_kb": 64, 00:17:52.589 "state": "online", 00:17:52.589 "raid_level": "raid5f", 00:17:52.589 "superblock": true, 00:17:52.589 "num_base_bdevs": 3, 00:17:52.589 "num_base_bdevs_discovered": 3, 00:17:52.589 "num_base_bdevs_operational": 
3, 00:17:52.589 "process": { 00:17:52.589 "type": "rebuild", 00:17:52.589 "target": "spare", 00:17:52.589 "progress": { 00:17:52.589 "blocks": 69632, 00:17:52.589 "percent": 54 00:17:52.589 } 00:17:52.589 }, 00:17:52.589 "base_bdevs_list": [ 00:17:52.589 { 00:17:52.589 "name": "spare", 00:17:52.589 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:52.589 "is_configured": true, 00:17:52.589 "data_offset": 2048, 00:17:52.589 "data_size": 63488 00:17:52.589 }, 00:17:52.589 { 00:17:52.589 "name": "BaseBdev2", 00:17:52.589 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:52.589 "is_configured": true, 00:17:52.589 "data_offset": 2048, 00:17:52.589 "data_size": 63488 00:17:52.589 }, 00:17:52.589 { 00:17:52.589 "name": "BaseBdev3", 00:17:52.589 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:52.589 "is_configured": true, 00:17:52.589 "data_offset": 2048, 00:17:52.589 "data_size": 63488 00:17:52.589 } 00:17:52.589 ] 00:17:52.589 }' 00:17:52.589 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:52.848 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:52.848 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:52.848 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:52.848 14:17:29 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:53.787 14:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:53.787 14:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:53.787 14:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:53.787 14:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:53.787 
14:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:53.787 14:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:53.787 14:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:53.787 14:17:30 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:53.787 14:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.787 14:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:53.787 14:17:30 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.787 14:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:53.787 "name": "raid_bdev1", 00:17:53.787 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:53.787 "strip_size_kb": 64, 00:17:53.787 "state": "online", 00:17:53.787 "raid_level": "raid5f", 00:17:53.787 "superblock": true, 00:17:53.787 "num_base_bdevs": 3, 00:17:53.787 "num_base_bdevs_discovered": 3, 00:17:53.787 "num_base_bdevs_operational": 3, 00:17:53.787 "process": { 00:17:53.787 "type": "rebuild", 00:17:53.787 "target": "spare", 00:17:53.787 "progress": { 00:17:53.787 "blocks": 94208, 00:17:53.787 "percent": 74 00:17:53.787 } 00:17:53.787 }, 00:17:53.787 "base_bdevs_list": [ 00:17:53.787 { 00:17:53.787 "name": "spare", 00:17:53.787 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:53.787 "is_configured": true, 00:17:53.787 "data_offset": 2048, 00:17:53.787 "data_size": 63488 00:17:53.787 }, 00:17:53.787 { 00:17:53.787 "name": "BaseBdev2", 00:17:53.787 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:53.787 "is_configured": true, 00:17:53.787 "data_offset": 2048, 00:17:53.787 "data_size": 63488 00:17:53.787 }, 00:17:53.787 { 00:17:53.787 "name": "BaseBdev3", 00:17:53.787 "uuid": 
"bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:53.787 "is_configured": true, 00:17:53.787 "data_offset": 2048, 00:17:53.787 "data_size": 63488 00:17:53.787 } 00:17:53.787 ] 00:17:53.787 }' 00:17:53.787 14:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:54.047 14:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:54.047 14:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:54.047 14:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:54.047 14:17:31 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.004 
14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:55.004 "name": "raid_bdev1", 00:17:55.004 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:55.004 "strip_size_kb": 64, 00:17:55.004 "state": "online", 00:17:55.004 "raid_level": "raid5f", 00:17:55.004 "superblock": true, 00:17:55.004 "num_base_bdevs": 3, 00:17:55.004 "num_base_bdevs_discovered": 3, 00:17:55.004 "num_base_bdevs_operational": 3, 00:17:55.004 "process": { 00:17:55.004 "type": "rebuild", 00:17:55.004 "target": "spare", 00:17:55.004 "progress": { 00:17:55.004 "blocks": 116736, 00:17:55.004 "percent": 91 00:17:55.004 } 00:17:55.004 }, 00:17:55.004 "base_bdevs_list": [ 00:17:55.004 { 00:17:55.004 "name": "spare", 00:17:55.004 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:55.004 "is_configured": true, 00:17:55.004 "data_offset": 2048, 00:17:55.004 "data_size": 63488 00:17:55.004 }, 00:17:55.004 { 00:17:55.004 "name": "BaseBdev2", 00:17:55.004 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:55.004 "is_configured": true, 00:17:55.004 "data_offset": 2048, 00:17:55.004 "data_size": 63488 00:17:55.004 }, 00:17:55.004 { 00:17:55.004 "name": "BaseBdev3", 00:17:55.004 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:55.004 "is_configured": true, 00:17:55.004 "data_offset": 2048, 00:17:55.004 "data_size": 63488 00:17:55.004 } 00:17:55.004 ] 00:17:55.004 }' 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:55.004 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:55.263 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:17:55.263 14:17:32 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:17:55.522 [2024-11-27 14:17:32.585863] 
bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:17:55.522 [2024-11-27 14:17:32.585968] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:17:55.522 [2024-11-27 14:17:32.586146] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.089 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.089 "name": "raid_bdev1", 00:17:56.089 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:56.089 "strip_size_kb": 64, 00:17:56.089 "state": "online", 00:17:56.089 "raid_level": "raid5f", 00:17:56.090 "superblock": true, 00:17:56.090 "num_base_bdevs": 3, 00:17:56.090 "num_base_bdevs_discovered": 3, 
00:17:56.090 "num_base_bdevs_operational": 3, 00:17:56.090 "base_bdevs_list": [ 00:17:56.090 { 00:17:56.090 "name": "spare", 00:17:56.090 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:56.090 "is_configured": true, 00:17:56.090 "data_offset": 2048, 00:17:56.090 "data_size": 63488 00:17:56.090 }, 00:17:56.090 { 00:17:56.090 "name": "BaseBdev2", 00:17:56.090 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:56.090 "is_configured": true, 00:17:56.090 "data_offset": 2048, 00:17:56.090 "data_size": 63488 00:17:56.090 }, 00:17:56.090 { 00:17:56.090 "name": "BaseBdev3", 00:17:56.090 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:56.090 "is_configured": true, 00:17:56.090 "data_offset": 2048, 00:17:56.090 "data_size": 63488 00:17:56.090 } 00:17:56.090 ] 00:17:56.090 }' 00:17:56.090 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.349 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:17:56.349 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:56.349 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:56.350 "name": "raid_bdev1", 00:17:56.350 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:56.350 "strip_size_kb": 64, 00:17:56.350 "state": "online", 00:17:56.350 "raid_level": "raid5f", 00:17:56.350 "superblock": true, 00:17:56.350 "num_base_bdevs": 3, 00:17:56.350 "num_base_bdevs_discovered": 3, 00:17:56.350 "num_base_bdevs_operational": 3, 00:17:56.350 "base_bdevs_list": [ 00:17:56.350 { 00:17:56.350 "name": "spare", 00:17:56.350 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:56.350 "is_configured": true, 00:17:56.350 "data_offset": 2048, 00:17:56.350 "data_size": 63488 00:17:56.350 }, 00:17:56.350 { 00:17:56.350 "name": "BaseBdev2", 00:17:56.350 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:56.350 "is_configured": true, 00:17:56.350 "data_offset": 2048, 00:17:56.350 "data_size": 63488 00:17:56.350 }, 00:17:56.350 { 00:17:56.350 "name": "BaseBdev3", 00:17:56.350 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:56.350 "is_configured": true, 00:17:56.350 "data_offset": 2048, 00:17:56.350 "data_size": 63488 00:17:56.350 } 00:17:56.350 ] 00:17:56.350 }' 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.350 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.608 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:56.608 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:56.608 "name": "raid_bdev1", 00:17:56.608 "uuid": 
"a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:56.608 "strip_size_kb": 64, 00:17:56.608 "state": "online", 00:17:56.608 "raid_level": "raid5f", 00:17:56.608 "superblock": true, 00:17:56.608 "num_base_bdevs": 3, 00:17:56.608 "num_base_bdevs_discovered": 3, 00:17:56.608 "num_base_bdevs_operational": 3, 00:17:56.608 "base_bdevs_list": [ 00:17:56.608 { 00:17:56.608 "name": "spare", 00:17:56.608 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:56.608 "is_configured": true, 00:17:56.608 "data_offset": 2048, 00:17:56.608 "data_size": 63488 00:17:56.608 }, 00:17:56.608 { 00:17:56.608 "name": "BaseBdev2", 00:17:56.608 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:56.608 "is_configured": true, 00:17:56.608 "data_offset": 2048, 00:17:56.608 "data_size": 63488 00:17:56.608 }, 00:17:56.608 { 00:17:56.608 "name": "BaseBdev3", 00:17:56.608 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:56.608 "is_configured": true, 00:17:56.608 "data_offset": 2048, 00:17:56.608 "data_size": 63488 00:17:56.608 } 00:17:56.608 ] 00:17:56.608 }' 00:17:56.608 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:56.608 14:17:33 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.867 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:17:56.867 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:56.867 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:56.867 [2024-11-27 14:17:34.139137] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:56.867 [2024-11-27 14:17:34.139175] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:56.867 [2024-11-27 14:17:34.139281] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:56.867 [2024-11-27 14:17:34.139386] bdev_raid.c: 
469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:56.867 [2024-11-27 14:17:34.139412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:17:56.867 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # 
local i 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.126 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:17:57.385 /dev/nbd0 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.385 1+0 records in 00:17:57.385 1+0 records out 00:17:57.385 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000347369 s, 11.8 MB/s 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.385 14:17:34 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.385 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:17:57.644 /dev/nbd1 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # 
dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:17:57.644 1+0 records in 00:17:57.644 1+0 records out 00:17:57.644 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000363654 s, 11.3 MB/s 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:17:57.644 14:17:34 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:17:57.903 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:17:57.903 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:17:57.903 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:17:57.903 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:17:57.903 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:17:57.903 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:57.903 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:17:58.176 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:17:58.176 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:17:58.176 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:17:58.176 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:58.177 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:58.177 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:17:58.177 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:58.177 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:58.177 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:17:58.177 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.745 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.745 [2024-11-27 14:17:35.750080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:58.745 [2024-11-27 14:17:35.750199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:58.745 [2024-11-27 14:17:35.750233] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:17:58.745 [2024-11-27 14:17:35.750251] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:58.745 [2024-11-27 14:17:35.753519] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:58.746 [2024-11-27 14:17:35.753580] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:58.746 [2024-11-27 14:17:35.753699] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:17:58.746 [2024-11-27 14:17:35.753767] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:58.746 [2024-11-27 14:17:35.753996] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:58.746 [2024-11-27 14:17:35.754156] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:17:58.746 spare 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.746 [2024-11-27 14:17:35.854357] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:17:58.746 [2024-11-27 14:17:35.854461] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:17:58.746 [2024-11-27 14:17:35.854962] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000047700 00:17:58.746 [2024-11-27 14:17:35.859837] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:17:58.746 [2024-11-27 14:17:35.859893] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:17:58.746 [2024-11-27 14:17:35.860222] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 
00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:58.746 "name": "raid_bdev1", 00:17:58.746 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:58.746 "strip_size_kb": 64, 00:17:58.746 "state": "online", 00:17:58.746 "raid_level": "raid5f", 00:17:58.746 "superblock": true, 00:17:58.746 "num_base_bdevs": 3, 00:17:58.746 "num_base_bdevs_discovered": 3, 00:17:58.746 "num_base_bdevs_operational": 3, 00:17:58.746 "base_bdevs_list": [ 00:17:58.746 { 00:17:58.746 "name": "spare", 00:17:58.746 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:58.746 "is_configured": true, 00:17:58.746 "data_offset": 2048, 00:17:58.746 "data_size": 63488 00:17:58.746 }, 00:17:58.746 { 00:17:58.746 "name": "BaseBdev2", 00:17:58.746 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:58.746 "is_configured": true, 00:17:58.746 "data_offset": 
2048, 00:17:58.746 "data_size": 63488 00:17:58.746 }, 00:17:58.746 { 00:17:58.746 "name": "BaseBdev3", 00:17:58.746 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:58.746 "is_configured": true, 00:17:58.746 "data_offset": 2048, 00:17:58.746 "data_size": 63488 00:17:58.746 } 00:17:58.746 ] 00:17:58.746 }' 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:58.746 14:17:35 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:17:59.313 "name": "raid_bdev1", 00:17:59.313 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:59.313 "strip_size_kb": 64, 00:17:59.313 "state": "online", 00:17:59.313 "raid_level": "raid5f", 00:17:59.313 "superblock": true, 00:17:59.313 
"num_base_bdevs": 3, 00:17:59.313 "num_base_bdevs_discovered": 3, 00:17:59.313 "num_base_bdevs_operational": 3, 00:17:59.313 "base_bdevs_list": [ 00:17:59.313 { 00:17:59.313 "name": "spare", 00:17:59.313 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:17:59.313 "is_configured": true, 00:17:59.313 "data_offset": 2048, 00:17:59.313 "data_size": 63488 00:17:59.313 }, 00:17:59.313 { 00:17:59.313 "name": "BaseBdev2", 00:17:59.313 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:59.313 "is_configured": true, 00:17:59.313 "data_offset": 2048, 00:17:59.313 "data_size": 63488 00:17:59.313 }, 00:17:59.313 { 00:17:59.313 "name": "BaseBdev3", 00:17:59.313 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:59.313 "is_configured": true, 00:17:59.313 "data_offset": 2048, 00:17:59.313 "data_size": 63488 00:17:59.313 } 00:17:59.313 ] 00:17:59.313 }' 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:17:59.313 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:17:59.572 14:17:36 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.572 [2024-11-27 14:17:36.618204] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:17:59.572 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:17:59.573 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:17:59.573 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:17:59.573 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.573 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:17:59.573 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:17:59.573 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.573 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:17:59.573 "name": "raid_bdev1", 00:17:59.573 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:17:59.573 "strip_size_kb": 64, 00:17:59.573 "state": "online", 00:17:59.573 "raid_level": "raid5f", 00:17:59.573 "superblock": true, 00:17:59.573 "num_base_bdevs": 3, 00:17:59.573 "num_base_bdevs_discovered": 2, 00:17:59.573 "num_base_bdevs_operational": 2, 00:17:59.573 "base_bdevs_list": [ 00:17:59.573 { 00:17:59.573 "name": null, 00:17:59.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.573 "is_configured": false, 00:17:59.573 "data_offset": 0, 00:17:59.573 "data_size": 63488 00:17:59.573 }, 00:17:59.573 { 00:17:59.573 "name": "BaseBdev2", 00:17:59.573 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:17:59.573 "is_configured": true, 00:17:59.573 "data_offset": 2048, 00:17:59.573 "data_size": 63488 00:17:59.573 }, 00:17:59.573 { 00:17:59.573 "name": "BaseBdev3", 00:17:59.573 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:17:59.573 "is_configured": true, 00:17:59.573 "data_offset": 2048, 00:17:59.573 "data_size": 63488 00:17:59.573 } 00:17:59.573 ] 00:17:59.573 }' 00:17:59.573 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:17:59.573 14:17:36 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.140 14:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:00.140 14:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.140 14:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:00.140 [2024-11-27 14:17:37.166415] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.140 [2024-11-27 14:17:37.166734] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:00.140 [2024-11-27 14:17:37.166823] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:00.140 [2024-11-27 14:17:37.166877] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:00.141 [2024-11-27 14:17:37.181744] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000477d0 00:18:00.141 14:17:37 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.141 14:17:37 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:18:00.141 [2024-11-27 14:17:37.188881] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 
-- # set +x 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:01.078 "name": "raid_bdev1", 00:18:01.078 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:18:01.078 "strip_size_kb": 64, 00:18:01.078 "state": "online", 00:18:01.078 "raid_level": "raid5f", 00:18:01.078 "superblock": true, 00:18:01.078 "num_base_bdevs": 3, 00:18:01.078 "num_base_bdevs_discovered": 3, 00:18:01.078 "num_base_bdevs_operational": 3, 00:18:01.078 "process": { 00:18:01.078 "type": "rebuild", 00:18:01.078 "target": "spare", 00:18:01.078 "progress": { 00:18:01.078 "blocks": 18432, 00:18:01.078 "percent": 14 00:18:01.078 } 00:18:01.078 }, 00:18:01.078 "base_bdevs_list": [ 00:18:01.078 { 00:18:01.078 "name": "spare", 00:18:01.078 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:18:01.078 "is_configured": true, 00:18:01.078 "data_offset": 2048, 00:18:01.078 "data_size": 63488 00:18:01.078 }, 00:18:01.078 { 00:18:01.078 "name": "BaseBdev2", 00:18:01.078 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:18:01.078 "is_configured": true, 00:18:01.078 "data_offset": 2048, 00:18:01.078 "data_size": 63488 00:18:01.078 }, 00:18:01.078 { 00:18:01.078 "name": "BaseBdev3", 00:18:01.078 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:18:01.078 "is_configured": true, 00:18:01.078 "data_offset": 2048, 00:18:01.078 "data_size": 63488 00:18:01.078 } 00:18:01.078 ] 00:18:01.078 }' 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 
00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:18:01.078 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.337 [2024-11-27 14:17:38.359081] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.337 [2024-11-27 14:17:38.404512] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:01.337 [2024-11-27 14:17:38.404679] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:01.337 [2024-11-27 14:17:38.404706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:01.337 [2024-11-27 14:17:38.404721] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:01.337 14:17:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:01.337 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.338 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.338 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.338 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:01.338 "name": "raid_bdev1", 00:18:01.338 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:18:01.338 "strip_size_kb": 64, 00:18:01.338 "state": "online", 00:18:01.338 "raid_level": "raid5f", 00:18:01.338 "superblock": true, 00:18:01.338 "num_base_bdevs": 3, 00:18:01.338 "num_base_bdevs_discovered": 2, 00:18:01.338 "num_base_bdevs_operational": 2, 00:18:01.338 "base_bdevs_list": [ 00:18:01.338 { 00:18:01.338 "name": null, 00:18:01.338 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:01.338 "is_configured": false, 00:18:01.338 "data_offset": 0, 00:18:01.338 "data_size": 63488 00:18:01.338 }, 00:18:01.338 { 00:18:01.338 "name": "BaseBdev2", 00:18:01.338 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:18:01.338 "is_configured": true, 00:18:01.338 "data_offset": 2048, 00:18:01.338 "data_size": 63488 00:18:01.338 }, 00:18:01.338 { 00:18:01.338 "name": "BaseBdev3", 00:18:01.338 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:18:01.338 "is_configured": true, 00:18:01.338 "data_offset": 2048, 00:18:01.338 "data_size": 63488 00:18:01.338 } 00:18:01.338 ] 00:18:01.338 }' 00:18:01.338 14:17:38 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:01.338 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.904 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:01.904 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.905 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:01.905 [2024-11-27 14:17:38.976796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:01.905 [2024-11-27 14:17:38.976923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:01.905 [2024-11-27 14:17:38.976955] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b780 00:18:01.905 [2024-11-27 14:17:38.976976] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:01.905 [2024-11-27 14:17:38.977649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:01.905 [2024-11-27 14:17:38.977697] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:01.905 [2024-11-27 14:17:38.977867] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:01.905 [2024-11-27 14:17:38.977897] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:01.905 [2024-11-27 14:17:38.977917] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:01.905 [2024-11-27 14:17:38.977954] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:01.905 [2024-11-27 14:17:38.993070] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000478a0 00:18:01.905 spare 00:18:01.905 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.905 14:17:38 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:18:01.905 [2024-11-27 14:17:39.000388] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:02.839 14:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.839 14:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:02.839 14:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:02.839 14:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:02.839 14:17:39 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:02.839 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:02.839 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.839 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.839 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:02.839 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.839 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:02.839 "name": "raid_bdev1", 00:18:02.839 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:18:02.839 "strip_size_kb": 64, 00:18:02.839 "state": 
"online", 00:18:02.839 "raid_level": "raid5f", 00:18:02.839 "superblock": true, 00:18:02.839 "num_base_bdevs": 3, 00:18:02.839 "num_base_bdevs_discovered": 3, 00:18:02.839 "num_base_bdevs_operational": 3, 00:18:02.839 "process": { 00:18:02.839 "type": "rebuild", 00:18:02.839 "target": "spare", 00:18:02.839 "progress": { 00:18:02.839 "blocks": 18432, 00:18:02.839 "percent": 14 00:18:02.839 } 00:18:02.839 }, 00:18:02.839 "base_bdevs_list": [ 00:18:02.839 { 00:18:02.839 "name": "spare", 00:18:02.839 "uuid": "d8ff7d5e-3379-5e11-afe9-1a376f547e1f", 00:18:02.839 "is_configured": true, 00:18:02.839 "data_offset": 2048, 00:18:02.839 "data_size": 63488 00:18:02.839 }, 00:18:02.839 { 00:18:02.839 "name": "BaseBdev2", 00:18:02.839 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:18:02.839 "is_configured": true, 00:18:02.839 "data_offset": 2048, 00:18:02.839 "data_size": 63488 00:18:02.839 }, 00:18:02.839 { 00:18:02.839 "name": "BaseBdev3", 00:18:02.839 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:18:02.839 "is_configured": true, 00:18:02.839 "data_offset": 2048, 00:18:02.839 "data_size": 63488 00:18:02.839 } 00:18:02.839 ] 00:18:02.839 }' 00:18:02.839 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:02.839 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.839 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.102 [2024-11-27 14:17:40.166625] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:03.102 [2024-11-27 14:17:40.216384] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:03.102 [2024-11-27 14:17:40.216505] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.102 [2024-11-27 14:17:40.216549] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:03.102 [2024-11-27 14:17:40.216560] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:03.102 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:03.103 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.103 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.103 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.103 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.103 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.103 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:03.103 "name": "raid_bdev1", 00:18:03.103 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:18:03.103 "strip_size_kb": 64, 00:18:03.103 "state": "online", 00:18:03.103 "raid_level": "raid5f", 00:18:03.103 "superblock": true, 00:18:03.103 "num_base_bdevs": 3, 00:18:03.103 "num_base_bdevs_discovered": 2, 00:18:03.103 "num_base_bdevs_operational": 2, 00:18:03.103 "base_bdevs_list": [ 00:18:03.103 { 00:18:03.103 "name": null, 00:18:03.103 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.103 "is_configured": false, 00:18:03.103 "data_offset": 0, 00:18:03.103 "data_size": 63488 00:18:03.103 }, 00:18:03.103 { 00:18:03.103 "name": "BaseBdev2", 00:18:03.103 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:18:03.103 "is_configured": true, 00:18:03.103 "data_offset": 2048, 00:18:03.103 "data_size": 63488 00:18:03.103 }, 00:18:03.103 { 00:18:03.103 "name": "BaseBdev3", 00:18:03.103 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:18:03.103 "is_configured": true, 00:18:03.103 "data_offset": 2048, 00:18:03.103 "data_size": 63488 00:18:03.103 } 00:18:03.103 ] 00:18:03.103 }' 00:18:03.103 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:03.103 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # 
local raid_bdev_name=raid_bdev1 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:03.669 "name": "raid_bdev1", 00:18:03.669 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:18:03.669 "strip_size_kb": 64, 00:18:03.669 "state": "online", 00:18:03.669 "raid_level": "raid5f", 00:18:03.669 "superblock": true, 00:18:03.669 "num_base_bdevs": 3, 00:18:03.669 "num_base_bdevs_discovered": 2, 00:18:03.669 "num_base_bdevs_operational": 2, 00:18:03.669 "base_bdevs_list": [ 00:18:03.669 { 00:18:03.669 "name": null, 00:18:03.669 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:03.669 "is_configured": false, 00:18:03.669 "data_offset": 0, 00:18:03.669 "data_size": 63488 00:18:03.669 }, 00:18:03.669 { 00:18:03.669 "name": "BaseBdev2", 00:18:03.669 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:18:03.669 "is_configured": true, 00:18:03.669 "data_offset": 2048, 00:18:03.669 "data_size": 63488 00:18:03.669 }, 00:18:03.669 { 00:18:03.669 "name": "BaseBdev3", 00:18:03.669 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:18:03.669 "is_configured": true, 
00:18:03.669 "data_offset": 2048, 00:18:03.669 "data_size": 63488 00:18:03.669 } 00:18:03.669 ] 00:18:03.669 }' 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.669 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.929 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.929 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:03.929 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:03.929 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:03.929 [2024-11-27 14:17:40.952221] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:03.929 [2024-11-27 14:17:40.952321] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:03.929 [2024-11-27 14:17:40.952358] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:03.929 [2024-11-27 14:17:40.952389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:03.929 [2024-11-27 14:17:40.953015] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:03.929 [2024-11-27 
14:17:40.953057] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:03.929 [2024-11-27 14:17:40.953172] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:03.929 [2024-11-27 14:17:40.953225] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:03.929 [2024-11-27 14:17:40.953250] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:03.929 [2024-11-27 14:17:40.953262] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:18:03.929 BaseBdev1 00:18:03.929 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:03.929 14:17:40 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:04.865 14:17:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:04.865 14:17:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:04.865 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:04.865 "name": "raid_bdev1", 00:18:04.865 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:18:04.865 "strip_size_kb": 64, 00:18:04.865 "state": "online", 00:18:04.865 "raid_level": "raid5f", 00:18:04.865 "superblock": true, 00:18:04.865 "num_base_bdevs": 3, 00:18:04.865 "num_base_bdevs_discovered": 2, 00:18:04.865 "num_base_bdevs_operational": 2, 00:18:04.865 "base_bdevs_list": [ 00:18:04.865 { 00:18:04.865 "name": null, 00:18:04.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:04.865 "is_configured": false, 00:18:04.865 "data_offset": 0, 00:18:04.865 "data_size": 63488 00:18:04.865 }, 00:18:04.865 { 00:18:04.865 "name": "BaseBdev2", 00:18:04.865 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:18:04.865 "is_configured": true, 00:18:04.865 "data_offset": 2048, 00:18:04.865 "data_size": 63488 00:18:04.865 }, 00:18:04.865 { 00:18:04.865 "name": "BaseBdev3", 00:18:04.865 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:18:04.865 "is_configured": true, 00:18:04.865 "data_offset": 2048, 00:18:04.865 "data_size": 63488 00:18:04.865 } 00:18:04.865 ] 00:18:04.865 }' 00:18:04.865 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:04.865 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:05.433 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:05.433 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:05.434 "name": "raid_bdev1", 00:18:05.434 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:18:05.434 "strip_size_kb": 64, 00:18:05.434 "state": "online", 00:18:05.434 "raid_level": "raid5f", 00:18:05.434 "superblock": true, 00:18:05.434 "num_base_bdevs": 3, 00:18:05.434 "num_base_bdevs_discovered": 2, 00:18:05.434 "num_base_bdevs_operational": 2, 00:18:05.434 "base_bdevs_list": [ 00:18:05.434 { 00:18:05.434 "name": null, 00:18:05.434 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:05.434 "is_configured": false, 00:18:05.434 "data_offset": 0, 00:18:05.434 "data_size": 63488 00:18:05.434 }, 00:18:05.434 { 00:18:05.434 "name": "BaseBdev2", 00:18:05.434 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 
00:18:05.434 "is_configured": true, 00:18:05.434 "data_offset": 2048, 00:18:05.434 "data_size": 63488 00:18:05.434 }, 00:18:05.434 { 00:18:05.434 "name": "BaseBdev3", 00:18:05.434 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:18:05.434 "is_configured": true, 00:18:05.434 "data_offset": 2048, 00:18:05.434 "data_size": 63488 00:18:05.434 } 00:18:05.434 ] 00:18:05.434 }' 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:05.434 14:17:42 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:05.434 [2024-11-27 14:17:42.664866] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:05.434 [2024-11-27 14:17:42.665079] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:05.434 [2024-11-27 14:17:42.665114] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:05.434 request: 00:18:05.434 { 00:18:05.434 "base_bdev": "BaseBdev1", 00:18:05.434 "raid_bdev": "raid_bdev1", 00:18:05.434 "method": "bdev_raid_add_base_bdev", 00:18:05.434 "req_id": 1 00:18:05.434 } 00:18:05.434 Got JSON-RPC error response 00:18:05.434 response: 00:18:05.434 { 00:18:05.434 "code": -22, 00:18:05.434 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:05.434 } 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:05.434 14:17:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 2 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.821 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:06.821 "name": "raid_bdev1", 00:18:06.821 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:18:06.821 "strip_size_kb": 64, 00:18:06.821 "state": "online", 00:18:06.821 "raid_level": "raid5f", 00:18:06.821 "superblock": true, 00:18:06.821 "num_base_bdevs": 3, 00:18:06.821 "num_base_bdevs_discovered": 2, 00:18:06.821 "num_base_bdevs_operational": 2, 00:18:06.821 "base_bdevs_list": [ 00:18:06.821 { 00:18:06.821 "name": null, 00:18:06.821 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:06.821 "is_configured": false, 00:18:06.821 "data_offset": 0, 00:18:06.821 "data_size": 63488 00:18:06.821 }, 00:18:06.821 { 00:18:06.821 
"name": "BaseBdev2", 00:18:06.822 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:18:06.822 "is_configured": true, 00:18:06.822 "data_offset": 2048, 00:18:06.822 "data_size": 63488 00:18:06.822 }, 00:18:06.822 { 00:18:06.822 "name": "BaseBdev3", 00:18:06.822 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:18:06.822 "is_configured": true, 00:18:06.822 "data_offset": 2048, 00:18:06.822 "data_size": 63488 00:18:06.822 } 00:18:06.822 ] 00:18:06.822 }' 00:18:06.822 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:06.822 14:17:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.081 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:07.081 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:07.081 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:07.081 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:07.081 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:07.081 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:07.081 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.081 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:07.081 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.081 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.081 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:07.081 "name": "raid_bdev1", 00:18:07.081 "uuid": "a72b4974-a22e-45b6-80a3-ba73721438fb", 00:18:07.081 
"strip_size_kb": 64, 00:18:07.081 "state": "online", 00:18:07.081 "raid_level": "raid5f", 00:18:07.081 "superblock": true, 00:18:07.081 "num_base_bdevs": 3, 00:18:07.081 "num_base_bdevs_discovered": 2, 00:18:07.081 "num_base_bdevs_operational": 2, 00:18:07.081 "base_bdevs_list": [ 00:18:07.081 { 00:18:07.081 "name": null, 00:18:07.081 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.081 "is_configured": false, 00:18:07.081 "data_offset": 0, 00:18:07.081 "data_size": 63488 00:18:07.081 }, 00:18:07.081 { 00:18:07.081 "name": "BaseBdev2", 00:18:07.081 "uuid": "07071152-ff86-5023-9d97-d0cb5fd25123", 00:18:07.081 "is_configured": true, 00:18:07.082 "data_offset": 2048, 00:18:07.082 "data_size": 63488 00:18:07.082 }, 00:18:07.082 { 00:18:07.082 "name": "BaseBdev3", 00:18:07.082 "uuid": "bcedbe94-e3fe-5111-b2d9-0757fa33e4f0", 00:18:07.082 "is_configured": true, 00:18:07.082 "data_offset": 2048, 00:18:07.082 "data_size": 63488 00:18:07.082 } 00:18:07.082 ] 00:18:07.082 }' 00:18:07.082 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:07.082 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:07.082 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:07.082 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:07.082 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 82296 00:18:07.082 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 82296 ']' 00:18:07.082 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 82296 00:18:07.341 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:07.341 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:07.341 14:17:44 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 82296 00:18:07.341 killing process with pid 82296 00:18:07.341 Received shutdown signal, test time was about 60.000000 seconds 00:18:07.341 00:18:07.341 Latency(us) 00:18:07.341 [2024-11-27T14:17:44.619Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:07.341 [2024-11-27T14:17:44.619Z] =================================================================================================================== 00:18:07.341 [2024-11-27T14:17:44.619Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:07.341 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:07.341 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:07.341 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 82296' 00:18:07.341 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 82296 00:18:07.341 [2024-11-27 14:17:44.394327] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:07.341 14:17:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 82296 00:18:07.341 [2024-11-27 14:17:44.394463] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:07.341 [2024-11-27 14:17:44.394540] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:07.341 [2024-11-27 14:17:44.394558] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:18:07.600 [2024-11-27 14:17:44.735856] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:08.538 14:17:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:18:08.538 00:18:08.538 real 0m25.003s 00:18:08.538 user 0m33.284s 
00:18:08.538 sys 0m2.692s 00:18:08.538 14:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:08.538 ************************************ 00:18:08.538 END TEST raid5f_rebuild_test_sb 00:18:08.538 ************************************ 00:18:08.538 14:17:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:08.538 14:17:45 bdev_raid -- bdev/bdev_raid.sh@985 -- # for n in {3..4} 00:18:08.538 14:17:45 bdev_raid -- bdev/bdev_raid.sh@986 -- # run_test raid5f_state_function_test raid_state_function_test raid5f 4 false 00:18:08.538 14:17:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:08.538 14:17:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:08.538 14:17:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:08.538 ************************************ 00:18:08.538 START TEST raid5f_state_function_test 00:18:08.538 ************************************ 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 false 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@207 -- # local superblock=false 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 
00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@216 -- # 
strip_size=64 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@222 -- # '[' false = true ']' 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@225 -- # superblock_create_arg= 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@229 -- # raid_pid=83059 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:08.538 Process raid pid: 83059 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83059' 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@231 -- # waitforlisten 83059 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@835 -- # '[' -z 83059 ']' 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:08.538 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:08.538 14:17:45 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:08.798 [2024-11-27 14:17:45.916613] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:18:08.798 [2024-11-27 14:17:45.916861] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:18:09.057 [2024-11-27 14:17:46.097894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:18:09.057 [2024-11-27 14:17:46.225745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:18:09.316 [2024-11-27 14:17:46.418477] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:09.316 [2024-11-27 14:17:46.418534] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@868 -- # return 0
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:09.884 [2024-11-27 14:17:46.866935] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:09.884 [2024-11-27 14:17:46.867009] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:09.884 [2024-11-27 14:17:46.867026] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:09.884 [2024-11-27 14:17:46.867043] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:09.884 [2024-11-27 14:17:46.867053] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:09.884 [2024-11-27 14:17:46.867068] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:09.884 [2024-11-27 14:17:46.867079] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:09.884 [2024-11-27 14:17:46.867093] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:09.884 "name": "Existed_Raid",
00:18:09.884 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:09.884 "strip_size_kb": 64,
00:18:09.884 "state": "configuring",
00:18:09.884 "raid_level": "raid5f",
00:18:09.884 "superblock": false,
00:18:09.884 "num_base_bdevs": 4,
00:18:09.884 "num_base_bdevs_discovered": 0,
00:18:09.884 "num_base_bdevs_operational": 4,
00:18:09.884 "base_bdevs_list": [
00:18:09.884 {
00:18:09.884 "name": "BaseBdev1",
00:18:09.884 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:09.884 "is_configured": false,
00:18:09.884 "data_offset": 0,
00:18:09.884 "data_size": 0
00:18:09.884 },
00:18:09.884 {
00:18:09.884 "name": "BaseBdev2",
00:18:09.884 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:09.884 "is_configured": false,
00:18:09.884 "data_offset": 0,
00:18:09.884 "data_size": 0
00:18:09.884 },
00:18:09.884 {
00:18:09.884 "name": "BaseBdev3",
00:18:09.884 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:09.884 "is_configured": false,
00:18:09.884 "data_offset": 0,
00:18:09.884 "data_size": 0
00:18:09.884 },
00:18:09.884 {
00:18:09.884 "name": "BaseBdev4",
00:18:09.884 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:09.884 "is_configured": false,
00:18:09.884 "data_offset": 0,
00:18:09.884 "data_size": 0
00:18:09.884 }
00:18:09.884 ]
00:18:09.884 }'
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:09.884 14:17:46 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:10.141 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:18:10.141 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.141 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:10.141 [2024-11-27 14:17:47.395034] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:10.141 [2024-11-27 14:17:47.395101] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring
00:18:10.141 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.141 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:18:10.141 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.141 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:10.141 [2024-11-27 14:17:47.407111] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1
00:18:10.141 [2024-11-27 14:17:47.407185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now
00:18:10.141 [2024-11-27 14:17:47.407214] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:10.141 [2024-11-27 14:17:47.407229] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:10.141 [2024-11-27 14:17:47.407239] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:10.141 [2024-11-27 14:17:47.407252] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:10.141 [2024-11-27 14:17:47.407261] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:10.141 [2024-11-27 14:17:47.407275] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:10.141 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.141 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1
00:18:10.141 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.141 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:10.400 [2024-11-27 14:17:47.450799] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:10.400 BaseBdev1 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.400 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:10.400 [
00:18:10.400 {
00:18:10.400 "name": "BaseBdev1",
00:18:10.400 "aliases": [
00:18:10.400 "f3c8eda9-733c-4670-b64e-323a899715f0"
00:18:10.400 ],
00:18:10.400 "product_name": "Malloc disk",
00:18:10.400 "block_size": 512,
00:18:10.400 "num_blocks": 65536,
00:18:10.400 "uuid": "f3c8eda9-733c-4670-b64e-323a899715f0",
00:18:10.400 "assigned_rate_limits": {
00:18:10.401 "rw_ios_per_sec": 0,
00:18:10.401 "rw_mbytes_per_sec": 0,
00:18:10.401 "r_mbytes_per_sec": 0,
00:18:10.401 "w_mbytes_per_sec": 0
00:18:10.401 },
00:18:10.401 "claimed": true,
00:18:10.401 "claim_type": "exclusive_write",
00:18:10.401 "zoned": false,
00:18:10.401 "supported_io_types": {
00:18:10.401 "read": true,
00:18:10.401 "write": true,
00:18:10.401 "unmap": true,
00:18:10.401 "flush": true,
00:18:10.401 "reset": true,
00:18:10.401 "nvme_admin": false,
00:18:10.401 "nvme_io": false,
00:18:10.401 "nvme_io_md": false,
00:18:10.401 "write_zeroes": true,
00:18:10.401 "zcopy": true,
00:18:10.401 "get_zone_info": false,
00:18:10.401 "zone_management": false,
00:18:10.401 "zone_append": false,
00:18:10.401 "compare": false,
00:18:10.401 "compare_and_write": false,
00:18:10.401 "abort": true,
00:18:10.401 "seek_hole": false,
00:18:10.401 "seek_data": false,
00:18:10.401 "copy": true,
00:18:10.401 "nvme_iov_md": false
00:18:10.401 },
00:18:10.401 "memory_domains": [
00:18:10.401 {
00:18:10.401 "dma_device_id": "system",
00:18:10.401 "dma_device_type": 1
00:18:10.401 },
00:18:10.401 {
00:18:10.401 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:10.401 "dma_device_type": 2
00:18:10.401 }
00:18:10.401 ],
00:18:10.401 "driver_specific": {}
00:18:10.401 }
00:18:10.401 ]
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:10.401 "name": "Existed_Raid",
00:18:10.401 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:10.401 "strip_size_kb": 64,
00:18:10.401 "state": "configuring",
00:18:10.401 "raid_level": "raid5f",
00:18:10.401 "superblock": false,
00:18:10.401 "num_base_bdevs": 4,
00:18:10.401 "num_base_bdevs_discovered": 1,
00:18:10.401 "num_base_bdevs_operational": 4,
00:18:10.401 "base_bdevs_list": [
00:18:10.401 {
00:18:10.401 "name": "BaseBdev1",
00:18:10.401 "uuid": "f3c8eda9-733c-4670-b64e-323a899715f0",
00:18:10.401 "is_configured": true,
00:18:10.401 "data_offset": 0,
00:18:10.401 "data_size": 65536
00:18:10.401 },
00:18:10.401 {
00:18:10.401 "name": "BaseBdev2",
00:18:10.401 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:10.401 "is_configured": false,
00:18:10.401 "data_offset": 0,
00:18:10.401 "data_size": 0
00:18:10.401 },
00:18:10.401 {
00:18:10.401 "name": "BaseBdev3",
00:18:10.401 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:10.401 "is_configured": false,
00:18:10.401 "data_offset": 0,
00:18:10.401 "data_size": 0
00:18:10.401 },
00:18:10.401 {
00:18:10.401 "name": "BaseBdev4",
00:18:10.401 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:10.401 "is_configured": false,
00:18:10.401 "data_offset": 0,
00:18:10.401 "data_size": 0
00:18:10.401 }
00:18:10.401 ]
00:18:10.401 }'
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:10.401 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:10.972 [2024-11-27 14:17:47.979002] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid
00:18:10.972 [2024-11-27 14:17:47.979076] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:10.972 [2024-11-27 14:17:47.987066] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed
00:18:10.972 [2024-11-27 14:17:47.989670] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2
00:18:10.972 [2024-11-27 14:17:47.989746] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now
00:18:10.972 [2024-11-27 14:17:47.989763] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3
00:18:10.972 [2024-11-27 14:17:47.989798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now
00:18:10.972 [2024-11-27 14:17:47.989811] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4
00:18:10.972 [2024-11-27 14:17:47.989824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i = 1 ))
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:10.972 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:10.973 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:10.973 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:10.973 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:10.973 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:10.973 14:17:47 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:10.973 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:10.973 14:17:47 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:10.973 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:10.973 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:10.973 "name": "Existed_Raid",
00:18:10.973 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:10.973 "strip_size_kb": 64,
00:18:10.973 "state": "configuring",
00:18:10.973 "raid_level": "raid5f",
00:18:10.973 "superblock": false,
00:18:10.973 "num_base_bdevs": 4,
00:18:10.973 "num_base_bdevs_discovered": 1,
00:18:10.973 "num_base_bdevs_operational": 4,
00:18:10.973 "base_bdevs_list": [
00:18:10.973 {
00:18:10.973 "name": "BaseBdev1",
00:18:10.973 "uuid": "f3c8eda9-733c-4670-b64e-323a899715f0",
00:18:10.973 "is_configured": true,
00:18:10.973 "data_offset": 0,
00:18:10.973 "data_size": 65536
00:18:10.973 },
00:18:10.973 {
00:18:10.973 "name": "BaseBdev2",
00:18:10.973 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:10.973 "is_configured": false,
00:18:10.973 "data_offset": 0,
00:18:10.973 "data_size": 0
00:18:10.973 },
00:18:10.973 {
00:18:10.973 "name": "BaseBdev3",
00:18:10.973 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:10.973 "is_configured": false,
00:18:10.973 "data_offset": 0,
00:18:10.973 "data_size": 0
00:18:10.973 },
00:18:10.973 {
00:18:10.973 "name": "BaseBdev4",
00:18:10.973 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:10.973 "is_configured": false,
00:18:10.973 "data_offset": 0,
00:18:10.973 "data_size": 0
00:18:10.973 }
00:18:10.973 ]
00:18:10.973 }'
00:18:10.973 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:10.973 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:11.231 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2
00:18:11.231 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.231 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:11.490 [2024-11-27 14:17:48.536699] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed
00:18:11.490 BaseBdev2 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:11.490 [
00:18:11.490 {
00:18:11.490 "name": "BaseBdev2",
00:18:11.490 "aliases": [
00:18:11.490 "02744808-6142-4ff2-9508-001d20ee67b7"
00:18:11.490 ],
00:18:11.490 "product_name": "Malloc disk",
00:18:11.490 "block_size": 512,
00:18:11.490 "num_blocks": 65536,
00:18:11.490 "uuid": "02744808-6142-4ff2-9508-001d20ee67b7",
00:18:11.490 "assigned_rate_limits": {
00:18:11.490 "rw_ios_per_sec": 0,
00:18:11.490 "rw_mbytes_per_sec": 0,
00:18:11.490 "r_mbytes_per_sec": 0,
00:18:11.490 "w_mbytes_per_sec": 0
00:18:11.490 },
00:18:11.490 "claimed": true,
00:18:11.490 "claim_type": "exclusive_write",
00:18:11.490 "zoned": false,
00:18:11.490 "supported_io_types": {
00:18:11.490 "read": true,
00:18:11.490 "write": true,
00:18:11.490 "unmap": true,
00:18:11.490 "flush": true,
00:18:11.490 "reset": true,
00:18:11.490 "nvme_admin": false,
00:18:11.490 "nvme_io": false,
00:18:11.490 "nvme_io_md": false,
00:18:11.490 "write_zeroes": true,
00:18:11.490 "zcopy": true,
00:18:11.490 "get_zone_info": false,
00:18:11.490 "zone_management": false,
00:18:11.490 "zone_append": false,
00:18:11.490 "compare": false,
00:18:11.490 "compare_and_write": false,
00:18:11.490 "abort": true,
00:18:11.490 "seek_hole": false,
00:18:11.490 "seek_data": false,
00:18:11.490 "copy": true,
00:18:11.490 "nvme_iov_md": false
00:18:11.490 },
00:18:11.490 "memory_domains": [
00:18:11.490 {
00:18:11.490 "dma_device_id": "system",
00:18:11.490 "dma_device_type": 1
00:18:11.490 },
00:18:11.490 {
00:18:11.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:11.490 "dma_device_type": 2
00:18:11.490 }
00:18:11.490 ],
00:18:11.490 "driver_specific": {}
00:18:11.490 }
00:18:11.490 ]
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")'
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{
00:18:11.490 "name": "Existed_Raid",
00:18:11.490 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:11.490 "strip_size_kb": 64,
00:18:11.490 "state": "configuring",
00:18:11.490 "raid_level": "raid5f",
00:18:11.490 "superblock": false,
00:18:11.490 "num_base_bdevs": 4,
00:18:11.490 "num_base_bdevs_discovered": 2,
00:18:11.490 "num_base_bdevs_operational": 4,
00:18:11.490 "base_bdevs_list": [
00:18:11.490 {
00:18:11.490 "name": "BaseBdev1",
00:18:11.490 "uuid": "f3c8eda9-733c-4670-b64e-323a899715f0",
00:18:11.490 "is_configured": true,
00:18:11.490 "data_offset": 0,
00:18:11.490 "data_size": 65536
00:18:11.490 },
00:18:11.490 {
00:18:11.490 "name": "BaseBdev2",
00:18:11.490 "uuid": "02744808-6142-4ff2-9508-001d20ee67b7",
00:18:11.490 "is_configured": true,
00:18:11.490 "data_offset": 0,
00:18:11.490 "data_size": 65536
00:18:11.490 },
00:18:11.490 {
00:18:11.490 "name": "BaseBdev3",
00:18:11.490 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:11.490 "is_configured": false,
00:18:11.490 "data_offset": 0,
00:18:11.490 "data_size": 0
00:18:11.490 },
00:18:11.490 {
00:18:11.490 "name": "BaseBdev4",
00:18:11.490 "uuid": "00000000-0000-0000-0000-000000000000",
00:18:11.490 "is_configured": false,
00:18:11.490 "data_offset": 0,
00:18:11.490 "data_size": 0
00:18:11.490 }
00:18:11.490 ]
00:18:11.490 }'
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable
00:18:11.490 14:17:48 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:12.056 [2024-11-27 14:17:49.147489] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed
00:18:12.056 BaseBdev3 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout=
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]]
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x
00:18:12.056 [
00:18:12.056 {
00:18:12.056 "name": "BaseBdev3",
00:18:12.056 "aliases": [
00:18:12.056 "023b8155-dc9c-4d97-8c64-eda055302947"
00:18:12.056 ],
00:18:12.056 "product_name": "Malloc disk",
00:18:12.056 "block_size": 512,
00:18:12.056 "num_blocks": 65536,
00:18:12.056 "uuid": "023b8155-dc9c-4d97-8c64-eda055302947",
00:18:12.056 "assigned_rate_limits": {
00:18:12.056 "rw_ios_per_sec": 0,
00:18:12.056 "rw_mbytes_per_sec": 0,
00:18:12.056 "r_mbytes_per_sec": 0,
00:18:12.056 "w_mbytes_per_sec": 0
00:18:12.056 },
00:18:12.056 "claimed": true,
00:18:12.056 "claim_type": "exclusive_write",
00:18:12.056 "zoned": false,
00:18:12.056 "supported_io_types": {
00:18:12.056 "read": true,
00:18:12.056 "write": true,
00:18:12.056 "unmap": true,
00:18:12.056 "flush": true,
00:18:12.056 "reset": true,
00:18:12.056 "nvme_admin": false,
00:18:12.056 "nvme_io": false,
00:18:12.056 "nvme_io_md": false,
00:18:12.056 "write_zeroes": true,
00:18:12.056 "zcopy": true,
00:18:12.056 "get_zone_info": false,
00:18:12.056 "zone_management": false,
00:18:12.056 "zone_append": false,
00:18:12.056 "compare": false,
00:18:12.056 "compare_and_write": false,
00:18:12.056 "abort": true,
00:18:12.056 "seek_hole": false,
00:18:12.056 "seek_data": false,
00:18:12.056 "copy": true,
00:18:12.056 "nvme_iov_md": false
00:18:12.056 },
00:18:12.056 "memory_domains": [
00:18:12.056 {
00:18:12.056 "dma_device_id": "system",
00:18:12.056 "dma_device_type": 1
00:18:12.056 },
00:18:12.056 {
00:18:12.056 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:18:12.056 "dma_device_type": 2
00:18:12.056 }
00:18:12.056 ],
00:18:12.056 "driver_specific": {}
00:18:12.056 }
00:18:12.056 ]
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ ))
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs ))
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4
00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.056 "name": "Existed_Raid", 00:18:12.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.056 "strip_size_kb": 64, 00:18:12.056 "state": "configuring", 00:18:12.056 "raid_level": "raid5f", 00:18:12.056 "superblock": false, 00:18:12.056 "num_base_bdevs": 4, 00:18:12.056 "num_base_bdevs_discovered": 3, 00:18:12.056 "num_base_bdevs_operational": 4, 00:18:12.056 "base_bdevs_list": [ 00:18:12.056 { 00:18:12.056 "name": "BaseBdev1", 00:18:12.056 "uuid": "f3c8eda9-733c-4670-b64e-323a899715f0", 00:18:12.056 "is_configured": true, 00:18:12.056 "data_offset": 0, 00:18:12.056 "data_size": 65536 00:18:12.056 }, 00:18:12.056 { 00:18:12.056 "name": "BaseBdev2", 00:18:12.056 "uuid": "02744808-6142-4ff2-9508-001d20ee67b7", 00:18:12.056 "is_configured": true, 00:18:12.056 "data_offset": 0, 00:18:12.056 "data_size": 65536 00:18:12.056 }, 00:18:12.056 { 
00:18:12.056 "name": "BaseBdev3", 00:18:12.056 "uuid": "023b8155-dc9c-4d97-8c64-eda055302947", 00:18:12.056 "is_configured": true, 00:18:12.056 "data_offset": 0, 00:18:12.056 "data_size": 65536 00:18:12.056 }, 00:18:12.056 { 00:18:12.056 "name": "BaseBdev4", 00:18:12.056 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.056 "is_configured": false, 00:18:12.056 "data_offset": 0, 00:18:12.056 "data_size": 0 00:18:12.056 } 00:18:12.056 ] 00:18:12.056 }' 00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.056 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.623 [2024-11-27 14:17:49.716904] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:12.623 [2024-11-27 14:17:49.716997] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:12.623 [2024-11-27 14:17:49.717026] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:12.623 [2024-11-27 14:17:49.717358] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:12.623 [2024-11-27 14:17:49.723771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:12.623 [2024-11-27 14:17:49.723987] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:12.623 [2024-11-27 14:17:49.724395] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:12.623 BaseBdev4 00:18:12.623 14:17:49 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.623 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.624 [ 00:18:12.624 { 00:18:12.624 "name": "BaseBdev4", 00:18:12.624 "aliases": [ 00:18:12.624 "b7b5df7f-389e-480a-bb83-1ecf27deee4d" 00:18:12.624 ], 00:18:12.624 "product_name": "Malloc disk", 00:18:12.624 "block_size": 512, 00:18:12.624 "num_blocks": 65536, 00:18:12.624 "uuid": "b7b5df7f-389e-480a-bb83-1ecf27deee4d", 00:18:12.624 "assigned_rate_limits": { 00:18:12.624 "rw_ios_per_sec": 0, 00:18:12.624 
"rw_mbytes_per_sec": 0, 00:18:12.624 "r_mbytes_per_sec": 0, 00:18:12.624 "w_mbytes_per_sec": 0 00:18:12.624 }, 00:18:12.624 "claimed": true, 00:18:12.624 "claim_type": "exclusive_write", 00:18:12.624 "zoned": false, 00:18:12.624 "supported_io_types": { 00:18:12.624 "read": true, 00:18:12.624 "write": true, 00:18:12.624 "unmap": true, 00:18:12.624 "flush": true, 00:18:12.624 "reset": true, 00:18:12.624 "nvme_admin": false, 00:18:12.624 "nvme_io": false, 00:18:12.624 "nvme_io_md": false, 00:18:12.624 "write_zeroes": true, 00:18:12.624 "zcopy": true, 00:18:12.624 "get_zone_info": false, 00:18:12.624 "zone_management": false, 00:18:12.624 "zone_append": false, 00:18:12.624 "compare": false, 00:18:12.624 "compare_and_write": false, 00:18:12.624 "abort": true, 00:18:12.624 "seek_hole": false, 00:18:12.624 "seek_data": false, 00:18:12.624 "copy": true, 00:18:12.624 "nvme_iov_md": false 00:18:12.624 }, 00:18:12.624 "memory_domains": [ 00:18:12.624 { 00:18:12.624 "dma_device_id": "system", 00:18:12.624 "dma_device_type": 1 00:18:12.624 }, 00:18:12.624 { 00:18:12.624 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:12.624 "dma_device_type": 2 00:18:12.624 } 00:18:12.624 ], 00:18:12.624 "driver_specific": {} 00:18:12.624 } 00:18:12.624 ] 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:12.624 14:17:49 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:12.624 "name": "Existed_Raid", 00:18:12.624 "uuid": "bf78e273-247b-4aa5-8682-f8d7870bfe8b", 00:18:12.624 "strip_size_kb": 64, 00:18:12.624 "state": "online", 00:18:12.624 "raid_level": "raid5f", 00:18:12.624 "superblock": false, 00:18:12.624 "num_base_bdevs": 4, 00:18:12.624 "num_base_bdevs_discovered": 4, 00:18:12.624 "num_base_bdevs_operational": 4, 00:18:12.624 "base_bdevs_list": [ 00:18:12.624 { 00:18:12.624 "name": 
"BaseBdev1", 00:18:12.624 "uuid": "f3c8eda9-733c-4670-b64e-323a899715f0", 00:18:12.624 "is_configured": true, 00:18:12.624 "data_offset": 0, 00:18:12.624 "data_size": 65536 00:18:12.624 }, 00:18:12.624 { 00:18:12.624 "name": "BaseBdev2", 00:18:12.624 "uuid": "02744808-6142-4ff2-9508-001d20ee67b7", 00:18:12.624 "is_configured": true, 00:18:12.624 "data_offset": 0, 00:18:12.624 "data_size": 65536 00:18:12.624 }, 00:18:12.624 { 00:18:12.624 "name": "BaseBdev3", 00:18:12.624 "uuid": "023b8155-dc9c-4d97-8c64-eda055302947", 00:18:12.624 "is_configured": true, 00:18:12.624 "data_offset": 0, 00:18:12.624 "data_size": 65536 00:18:12.624 }, 00:18:12.624 { 00:18:12.624 "name": "BaseBdev4", 00:18:12.624 "uuid": "b7b5df7f-389e-480a-bb83-1ecf27deee4d", 00:18:12.624 "is_configured": true, 00:18:12.624 "data_offset": 0, 00:18:12.624 "data_size": 65536 00:18:12.624 } 00:18:12.624 ] 00:18:12.624 }' 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:12.624 14:17:49 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.192 [2024-11-27 14:17:50.320261] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.192 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:13.192 "name": "Existed_Raid", 00:18:13.192 "aliases": [ 00:18:13.192 "bf78e273-247b-4aa5-8682-f8d7870bfe8b" 00:18:13.192 ], 00:18:13.192 "product_name": "Raid Volume", 00:18:13.192 "block_size": 512, 00:18:13.192 "num_blocks": 196608, 00:18:13.192 "uuid": "bf78e273-247b-4aa5-8682-f8d7870bfe8b", 00:18:13.192 "assigned_rate_limits": { 00:18:13.192 "rw_ios_per_sec": 0, 00:18:13.192 "rw_mbytes_per_sec": 0, 00:18:13.193 "r_mbytes_per_sec": 0, 00:18:13.193 "w_mbytes_per_sec": 0 00:18:13.193 }, 00:18:13.193 "claimed": false, 00:18:13.193 "zoned": false, 00:18:13.193 "supported_io_types": { 00:18:13.193 "read": true, 00:18:13.193 "write": true, 00:18:13.193 "unmap": false, 00:18:13.193 "flush": false, 00:18:13.193 "reset": true, 00:18:13.193 "nvme_admin": false, 00:18:13.193 "nvme_io": false, 00:18:13.193 "nvme_io_md": false, 00:18:13.193 "write_zeroes": true, 00:18:13.193 "zcopy": false, 00:18:13.193 "get_zone_info": false, 00:18:13.193 "zone_management": false, 00:18:13.193 "zone_append": false, 00:18:13.193 "compare": false, 00:18:13.193 "compare_and_write": false, 00:18:13.193 "abort": false, 00:18:13.193 "seek_hole": false, 00:18:13.193 "seek_data": false, 00:18:13.193 "copy": false, 00:18:13.193 "nvme_iov_md": false 00:18:13.193 }, 00:18:13.193 "driver_specific": { 00:18:13.193 "raid": { 00:18:13.193 "uuid": "bf78e273-247b-4aa5-8682-f8d7870bfe8b", 00:18:13.193 "strip_size_kb": 64, 
00:18:13.193 "state": "online", 00:18:13.193 "raid_level": "raid5f", 00:18:13.193 "superblock": false, 00:18:13.193 "num_base_bdevs": 4, 00:18:13.193 "num_base_bdevs_discovered": 4, 00:18:13.193 "num_base_bdevs_operational": 4, 00:18:13.193 "base_bdevs_list": [ 00:18:13.193 { 00:18:13.193 "name": "BaseBdev1", 00:18:13.193 "uuid": "f3c8eda9-733c-4670-b64e-323a899715f0", 00:18:13.193 "is_configured": true, 00:18:13.193 "data_offset": 0, 00:18:13.193 "data_size": 65536 00:18:13.193 }, 00:18:13.193 { 00:18:13.193 "name": "BaseBdev2", 00:18:13.193 "uuid": "02744808-6142-4ff2-9508-001d20ee67b7", 00:18:13.193 "is_configured": true, 00:18:13.193 "data_offset": 0, 00:18:13.193 "data_size": 65536 00:18:13.193 }, 00:18:13.193 { 00:18:13.193 "name": "BaseBdev3", 00:18:13.193 "uuid": "023b8155-dc9c-4d97-8c64-eda055302947", 00:18:13.193 "is_configured": true, 00:18:13.193 "data_offset": 0, 00:18:13.193 "data_size": 65536 00:18:13.193 }, 00:18:13.193 { 00:18:13.193 "name": "BaseBdev4", 00:18:13.193 "uuid": "b7b5df7f-389e-480a-bb83-1ecf27deee4d", 00:18:13.193 "is_configured": true, 00:18:13.193 "data_offset": 0, 00:18:13.193 "data_size": 65536 00:18:13.193 } 00:18:13.193 ] 00:18:13.193 } 00:18:13.193 } 00:18:13.193 }' 00:18:13.193 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:13.193 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:13.193 BaseBdev2 00:18:13.193 BaseBdev3 00:18:13.193 BaseBdev4' 00:18:13.193 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.193 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:13.193 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.193 14:17:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:13.193 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.193 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.193 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd 
bdev_get_bdevs -b BaseBdev3 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.452 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:13.453 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.453 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:13.453 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:13.453 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:13.453 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.453 14:17:50 bdev_raid.raid5f_state_function_test 
-- common/autotest_common.sh@10 -- # set +x 00:18:13.453 [2024-11-27 14:17:50.684126] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:13.711 14:17:50 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.711 14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:13.711 "name": "Existed_Raid", 00:18:13.711 "uuid": "bf78e273-247b-4aa5-8682-f8d7870bfe8b", 00:18:13.711 "strip_size_kb": 64, 00:18:13.711 "state": "online", 00:18:13.711 "raid_level": "raid5f", 00:18:13.711 "superblock": false, 00:18:13.712 "num_base_bdevs": 4, 00:18:13.712 "num_base_bdevs_discovered": 3, 00:18:13.712 "num_base_bdevs_operational": 3, 00:18:13.712 "base_bdevs_list": [ 00:18:13.712 { 00:18:13.712 "name": null, 00:18:13.712 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:13.712 "is_configured": false, 00:18:13.712 "data_offset": 0, 00:18:13.712 "data_size": 65536 00:18:13.712 }, 00:18:13.712 { 00:18:13.712 "name": "BaseBdev2", 00:18:13.712 "uuid": "02744808-6142-4ff2-9508-001d20ee67b7", 00:18:13.712 "is_configured": true, 00:18:13.712 "data_offset": 0, 00:18:13.712 "data_size": 65536 00:18:13.712 }, 00:18:13.712 { 00:18:13.712 "name": "BaseBdev3", 00:18:13.712 "uuid": "023b8155-dc9c-4d97-8c64-eda055302947", 00:18:13.712 "is_configured": true, 00:18:13.712 "data_offset": 0, 00:18:13.712 "data_size": 65536 00:18:13.712 }, 00:18:13.712 { 00:18:13.712 "name": "BaseBdev4", 00:18:13.712 "uuid": "b7b5df7f-389e-480a-bb83-1ecf27deee4d", 00:18:13.712 "is_configured": true, 00:18:13.712 "data_offset": 0, 00:18:13.712 "data_size": 65536 00:18:13.712 } 00:18:13.712 ] 00:18:13.712 }' 00:18:13.712 
14:17:50 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:13.712 14:17:50 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.279 [2024-11-27 14:17:51.333824] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:14.279 [2024-11-27 14:17:51.334103] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:14.279 [2024-11-27 14:17:51.415839] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.279 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.279 [2024-11-27 14:17:51.479883] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.539 [2024-11-27 14:17:51.630064] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:14.539 [2024-11-27 14:17:51.630144] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 
00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.539 BaseBdev2 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.539 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.799 [ 00:18:14.799 { 00:18:14.799 "name": "BaseBdev2", 00:18:14.799 "aliases": [ 00:18:14.799 "d178a5c4-9e1e-48bf-a7c1-f1f16050b7cd" 00:18:14.799 ], 00:18:14.799 "product_name": "Malloc disk", 00:18:14.799 "block_size": 512, 00:18:14.799 "num_blocks": 65536, 00:18:14.799 "uuid": "d178a5c4-9e1e-48bf-a7c1-f1f16050b7cd", 00:18:14.799 "assigned_rate_limits": { 00:18:14.799 "rw_ios_per_sec": 0, 00:18:14.799 "rw_mbytes_per_sec": 0, 00:18:14.799 "r_mbytes_per_sec": 0, 00:18:14.799 "w_mbytes_per_sec": 0 00:18:14.799 }, 00:18:14.799 "claimed": false, 00:18:14.799 "zoned": false, 00:18:14.799 "supported_io_types": { 00:18:14.799 "read": true, 00:18:14.799 "write": true, 00:18:14.799 "unmap": true, 00:18:14.799 "flush": true, 00:18:14.799 "reset": true, 00:18:14.799 "nvme_admin": false, 00:18:14.799 "nvme_io": false, 00:18:14.799 "nvme_io_md": false, 00:18:14.799 "write_zeroes": true, 00:18:14.799 "zcopy": true, 00:18:14.799 "get_zone_info": false, 00:18:14.799 "zone_management": false, 00:18:14.799 "zone_append": false, 00:18:14.799 "compare": false, 00:18:14.799 "compare_and_write": false, 00:18:14.799 "abort": true, 00:18:14.799 "seek_hole": false, 00:18:14.799 "seek_data": false, 00:18:14.799 "copy": true, 00:18:14.799 "nvme_iov_md": false 00:18:14.799 }, 00:18:14.799 "memory_domains": [ 00:18:14.799 { 00:18:14.799 "dma_device_id": "system", 00:18:14.799 
"dma_device_type": 1 00:18:14.799 }, 00:18:14.799 { 00:18:14.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.799 "dma_device_type": 2 00:18:14.799 } 00:18:14.799 ], 00:18:14.799 "driver_specific": {} 00:18:14.799 } 00:18:14.799 ] 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.799 BaseBdev3 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:14.799 14:17:51 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.799 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.799 [ 00:18:14.799 { 00:18:14.799 "name": "BaseBdev3", 00:18:14.799 "aliases": [ 00:18:14.799 "a107b79a-319f-4e29-8bcb-7cf671875cc1" 00:18:14.799 ], 00:18:14.799 "product_name": "Malloc disk", 00:18:14.799 "block_size": 512, 00:18:14.799 "num_blocks": 65536, 00:18:14.799 "uuid": "a107b79a-319f-4e29-8bcb-7cf671875cc1", 00:18:14.800 "assigned_rate_limits": { 00:18:14.800 "rw_ios_per_sec": 0, 00:18:14.800 "rw_mbytes_per_sec": 0, 00:18:14.800 "r_mbytes_per_sec": 0, 00:18:14.800 "w_mbytes_per_sec": 0 00:18:14.800 }, 00:18:14.800 "claimed": false, 00:18:14.800 "zoned": false, 00:18:14.800 "supported_io_types": { 00:18:14.800 "read": true, 00:18:14.800 "write": true, 00:18:14.800 "unmap": true, 00:18:14.800 "flush": true, 00:18:14.800 "reset": true, 00:18:14.800 "nvme_admin": false, 00:18:14.800 "nvme_io": false, 00:18:14.800 "nvme_io_md": false, 00:18:14.800 "write_zeroes": true, 00:18:14.800 "zcopy": true, 00:18:14.800 "get_zone_info": false, 00:18:14.800 "zone_management": false, 00:18:14.800 "zone_append": false, 00:18:14.800 "compare": false, 00:18:14.800 "compare_and_write": false, 00:18:14.800 "abort": true, 00:18:14.800 "seek_hole": false, 00:18:14.800 "seek_data": false, 00:18:14.800 "copy": true, 00:18:14.800 "nvme_iov_md": false 00:18:14.800 }, 00:18:14.800 "memory_domains": [ 00:18:14.800 { 00:18:14.800 
"dma_device_id": "system", 00:18:14.800 "dma_device_type": 1 00:18:14.800 }, 00:18:14.800 { 00:18:14.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.800 "dma_device_type": 2 00:18:14.800 } 00:18:14.800 ], 00:18:14.800 "driver_specific": {} 00:18:14.800 } 00:18:14.800 ] 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.800 BaseBdev4 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 
00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.800 [ 00:18:14.800 { 00:18:14.800 "name": "BaseBdev4", 00:18:14.800 "aliases": [ 00:18:14.800 "2277d006-87d3-4783-bd3b-c85b5c702f3e" 00:18:14.800 ], 00:18:14.800 "product_name": "Malloc disk", 00:18:14.800 "block_size": 512, 00:18:14.800 "num_blocks": 65536, 00:18:14.800 "uuid": "2277d006-87d3-4783-bd3b-c85b5c702f3e", 00:18:14.800 "assigned_rate_limits": { 00:18:14.800 "rw_ios_per_sec": 0, 00:18:14.800 "rw_mbytes_per_sec": 0, 00:18:14.800 "r_mbytes_per_sec": 0, 00:18:14.800 "w_mbytes_per_sec": 0 00:18:14.800 }, 00:18:14.800 "claimed": false, 00:18:14.800 "zoned": false, 00:18:14.800 "supported_io_types": { 00:18:14.800 "read": true, 00:18:14.800 "write": true, 00:18:14.800 "unmap": true, 00:18:14.800 "flush": true, 00:18:14.800 "reset": true, 00:18:14.800 "nvme_admin": false, 00:18:14.800 "nvme_io": false, 00:18:14.800 "nvme_io_md": false, 00:18:14.800 "write_zeroes": true, 00:18:14.800 "zcopy": true, 00:18:14.800 "get_zone_info": false, 00:18:14.800 "zone_management": false, 00:18:14.800 "zone_append": false, 00:18:14.800 "compare": false, 00:18:14.800 "compare_and_write": false, 00:18:14.800 "abort": true, 00:18:14.800 "seek_hole": false, 00:18:14.800 "seek_data": false, 00:18:14.800 "copy": true, 00:18:14.800 "nvme_iov_md": false 00:18:14.800 }, 00:18:14.800 "memory_domains": [ 
00:18:14.800 { 00:18:14.800 "dma_device_id": "system", 00:18:14.800 "dma_device_type": 1 00:18:14.800 }, 00:18:14.800 { 00:18:14.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:14.800 "dma_device_type": 2 00:18:14.800 } 00:18:14.800 ], 00:18:14.800 "driver_specific": {} 00:18:14.800 } 00:18:14.800 ] 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.800 14:17:51 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.800 [2024-11-27 14:17:52.000854] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:14.800 [2024-11-27 14:17:52.000930] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:14.800 [2024-11-27 14:17:52.000965] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:14.800 [2024-11-27 14:17:52.003488] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:14.800 [2024-11-27 14:17:52.003737] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@291 -- # 
verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:14.800 "name": "Existed_Raid", 00:18:14.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.800 "strip_size_kb": 64, 00:18:14.800 "state": "configuring", 00:18:14.800 "raid_level": "raid5f", 00:18:14.800 
"superblock": false, 00:18:14.800 "num_base_bdevs": 4, 00:18:14.800 "num_base_bdevs_discovered": 3, 00:18:14.800 "num_base_bdevs_operational": 4, 00:18:14.800 "base_bdevs_list": [ 00:18:14.800 { 00:18:14.800 "name": "BaseBdev1", 00:18:14.800 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.800 "is_configured": false, 00:18:14.800 "data_offset": 0, 00:18:14.800 "data_size": 0 00:18:14.800 }, 00:18:14.800 { 00:18:14.800 "name": "BaseBdev2", 00:18:14.800 "uuid": "d178a5c4-9e1e-48bf-a7c1-f1f16050b7cd", 00:18:14.800 "is_configured": true, 00:18:14.800 "data_offset": 0, 00:18:14.800 "data_size": 65536 00:18:14.800 }, 00:18:14.800 { 00:18:14.800 "name": "BaseBdev3", 00:18:14.800 "uuid": "a107b79a-319f-4e29-8bcb-7cf671875cc1", 00:18:14.800 "is_configured": true, 00:18:14.800 "data_offset": 0, 00:18:14.800 "data_size": 65536 00:18:14.800 }, 00:18:14.800 { 00:18:14.800 "name": "BaseBdev4", 00:18:14.800 "uuid": "2277d006-87d3-4783-bd3b-c85b5c702f3e", 00:18:14.800 "is_configured": true, 00:18:14.800 "data_offset": 0, 00:18:14.800 "data_size": 65536 00:18:14.800 } 00:18:14.800 ] 00:18:14.800 }' 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:14.800 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.367 [2024-11-27 14:17:52.561042] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state 
Existed_Raid configuring raid5f 64 4 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.367 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.368 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.368 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.368 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:15.368 "name": "Existed_Raid", 00:18:15.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.368 "strip_size_kb": 64, 00:18:15.368 "state": "configuring", 00:18:15.368 "raid_level": "raid5f", 00:18:15.368 "superblock": false, 
00:18:15.368 "num_base_bdevs": 4, 00:18:15.368 "num_base_bdevs_discovered": 2, 00:18:15.368 "num_base_bdevs_operational": 4, 00:18:15.368 "base_bdevs_list": [ 00:18:15.368 { 00:18:15.368 "name": "BaseBdev1", 00:18:15.368 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:15.368 "is_configured": false, 00:18:15.368 "data_offset": 0, 00:18:15.368 "data_size": 0 00:18:15.368 }, 00:18:15.368 { 00:18:15.368 "name": null, 00:18:15.368 "uuid": "d178a5c4-9e1e-48bf-a7c1-f1f16050b7cd", 00:18:15.368 "is_configured": false, 00:18:15.368 "data_offset": 0, 00:18:15.368 "data_size": 65536 00:18:15.368 }, 00:18:15.368 { 00:18:15.368 "name": "BaseBdev3", 00:18:15.368 "uuid": "a107b79a-319f-4e29-8bcb-7cf671875cc1", 00:18:15.368 "is_configured": true, 00:18:15.368 "data_offset": 0, 00:18:15.368 "data_size": 65536 00:18:15.368 }, 00:18:15.368 { 00:18:15.368 "name": "BaseBdev4", 00:18:15.368 "uuid": "2277d006-87d3-4783-bd3b-c85b5c702f3e", 00:18:15.368 "is_configured": true, 00:18:15.368 "data_offset": 0, 00:18:15.368 "data_size": 65536 00:18:15.368 } 00:18:15.368 ] 00:18:15.368 }' 00:18:15.368 14:17:52 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:15.368 14:17:52 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:15.934 
14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 [2024-11-27 14:17:53.171657] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.934 BaseBdev1 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:15.934 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.934 
14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:15.934 [ 00:18:15.934 { 00:18:15.934 "name": "BaseBdev1", 00:18:15.934 "aliases": [ 00:18:15.934 "1323b430-242c-4dd9-b6f4-303286fb31ae" 00:18:15.934 ], 00:18:15.934 "product_name": "Malloc disk", 00:18:15.934 "block_size": 512, 00:18:15.934 "num_blocks": 65536, 00:18:15.934 "uuid": "1323b430-242c-4dd9-b6f4-303286fb31ae", 00:18:15.934 "assigned_rate_limits": { 00:18:15.934 "rw_ios_per_sec": 0, 00:18:15.934 "rw_mbytes_per_sec": 0, 00:18:15.934 "r_mbytes_per_sec": 0, 00:18:15.934 "w_mbytes_per_sec": 0 00:18:15.934 }, 00:18:15.934 "claimed": true, 00:18:15.934 "claim_type": "exclusive_write", 00:18:15.934 "zoned": false, 00:18:15.934 "supported_io_types": { 00:18:15.934 "read": true, 00:18:15.934 "write": true, 00:18:15.934 "unmap": true, 00:18:15.934 "flush": true, 00:18:15.934 "reset": true, 00:18:15.935 "nvme_admin": false, 00:18:15.935 "nvme_io": false, 00:18:15.935 "nvme_io_md": false, 00:18:15.935 "write_zeroes": true, 00:18:15.935 "zcopy": true, 00:18:15.935 "get_zone_info": false, 00:18:15.935 "zone_management": false, 00:18:15.935 "zone_append": false, 00:18:15.935 "compare": false, 00:18:15.935 "compare_and_write": false, 00:18:15.935 "abort": true, 00:18:15.935 "seek_hole": false, 00:18:15.935 "seek_data": false, 00:18:15.935 "copy": true, 00:18:15.935 "nvme_iov_md": false 00:18:15.935 }, 00:18:15.935 "memory_domains": [ 00:18:15.935 { 00:18:15.935 "dma_device_id": "system", 00:18:15.935 "dma_device_type": 1 00:18:15.935 }, 00:18:15.935 { 00:18:15.935 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:15.935 "dma_device_type": 2 00:18:15.935 } 00:18:15.935 ], 00:18:15.935 "driver_specific": {} 00:18:15.935 } 00:18:15.935 ] 00:18:15.935 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.935 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:15.935 14:17:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:15.935 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:15.935 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:15.935 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:15.935 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:15.935 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:15.935 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:15.935 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:15.935 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:15.935 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.218 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.218 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.218 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.218 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.218 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.218 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.218 "name": "Existed_Raid", 00:18:16.218 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.218 "strip_size_kb": 64, 00:18:16.218 "state": 
"configuring", 00:18:16.218 "raid_level": "raid5f", 00:18:16.218 "superblock": false, 00:18:16.218 "num_base_bdevs": 4, 00:18:16.218 "num_base_bdevs_discovered": 3, 00:18:16.218 "num_base_bdevs_operational": 4, 00:18:16.218 "base_bdevs_list": [ 00:18:16.218 { 00:18:16.218 "name": "BaseBdev1", 00:18:16.218 "uuid": "1323b430-242c-4dd9-b6f4-303286fb31ae", 00:18:16.218 "is_configured": true, 00:18:16.218 "data_offset": 0, 00:18:16.218 "data_size": 65536 00:18:16.218 }, 00:18:16.218 { 00:18:16.218 "name": null, 00:18:16.218 "uuid": "d178a5c4-9e1e-48bf-a7c1-f1f16050b7cd", 00:18:16.218 "is_configured": false, 00:18:16.218 "data_offset": 0, 00:18:16.218 "data_size": 65536 00:18:16.218 }, 00:18:16.218 { 00:18:16.218 "name": "BaseBdev3", 00:18:16.218 "uuid": "a107b79a-319f-4e29-8bcb-7cf671875cc1", 00:18:16.218 "is_configured": true, 00:18:16.218 "data_offset": 0, 00:18:16.218 "data_size": 65536 00:18:16.218 }, 00:18:16.218 { 00:18:16.218 "name": "BaseBdev4", 00:18:16.218 "uuid": "2277d006-87d3-4783-bd3b-c85b5c702f3e", 00:18:16.218 "is_configured": true, 00:18:16.218 "data_offset": 0, 00:18:16.218 "data_size": 65536 00:18:16.218 } 00:18:16.218 ] 00:18:16.218 }' 00:18:16.218 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.218 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.483 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.483 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.483 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.483 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:16.483 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.746 14:17:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.746 [2024-11-27 14:17:53.771966] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:16.746 14:17:53 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.746 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:16.746 "name": "Existed_Raid", 00:18:16.746 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.746 "strip_size_kb": 64, 00:18:16.746 "state": "configuring", 00:18:16.746 "raid_level": "raid5f", 00:18:16.746 "superblock": false, 00:18:16.746 "num_base_bdevs": 4, 00:18:16.746 "num_base_bdevs_discovered": 2, 00:18:16.746 "num_base_bdevs_operational": 4, 00:18:16.746 "base_bdevs_list": [ 00:18:16.747 { 00:18:16.747 "name": "BaseBdev1", 00:18:16.747 "uuid": "1323b430-242c-4dd9-b6f4-303286fb31ae", 00:18:16.747 "is_configured": true, 00:18:16.747 "data_offset": 0, 00:18:16.747 "data_size": 65536 00:18:16.747 }, 00:18:16.747 { 00:18:16.747 "name": null, 00:18:16.747 "uuid": "d178a5c4-9e1e-48bf-a7c1-f1f16050b7cd", 00:18:16.747 "is_configured": false, 00:18:16.747 "data_offset": 0, 00:18:16.747 "data_size": 65536 00:18:16.747 }, 00:18:16.747 { 00:18:16.747 "name": null, 00:18:16.747 "uuid": "a107b79a-319f-4e29-8bcb-7cf671875cc1", 00:18:16.747 "is_configured": false, 00:18:16.747 "data_offset": 0, 00:18:16.747 "data_size": 65536 00:18:16.747 }, 00:18:16.747 { 00:18:16.747 "name": "BaseBdev4", 00:18:16.747 "uuid": "2277d006-87d3-4783-bd3b-c85b5c702f3e", 00:18:16.747 "is_configured": true, 00:18:16.747 "data_offset": 0, 00:18:16.747 "data_size": 65536 00:18:16.747 } 00:18:16.747 ] 00:18:16.747 }' 00:18:16.747 14:17:53 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:16.747 14:17:53 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.315 [2024-11-27 14:17:54.360134] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.315 
14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.315 "name": "Existed_Raid", 00:18:17.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.315 "strip_size_kb": 64, 00:18:17.315 "state": "configuring", 00:18:17.315 "raid_level": "raid5f", 00:18:17.315 "superblock": false, 00:18:17.315 "num_base_bdevs": 4, 00:18:17.315 "num_base_bdevs_discovered": 3, 00:18:17.315 "num_base_bdevs_operational": 4, 00:18:17.315 "base_bdevs_list": [ 00:18:17.315 { 00:18:17.315 "name": "BaseBdev1", 00:18:17.315 "uuid": "1323b430-242c-4dd9-b6f4-303286fb31ae", 00:18:17.315 "is_configured": true, 00:18:17.315 "data_offset": 0, 00:18:17.315 "data_size": 65536 00:18:17.315 }, 00:18:17.315 { 00:18:17.315 "name": null, 00:18:17.315 "uuid": "d178a5c4-9e1e-48bf-a7c1-f1f16050b7cd", 00:18:17.315 "is_configured": 
false, 00:18:17.315 "data_offset": 0, 00:18:17.315 "data_size": 65536 00:18:17.315 }, 00:18:17.315 { 00:18:17.315 "name": "BaseBdev3", 00:18:17.315 "uuid": "a107b79a-319f-4e29-8bcb-7cf671875cc1", 00:18:17.315 "is_configured": true, 00:18:17.315 "data_offset": 0, 00:18:17.315 "data_size": 65536 00:18:17.315 }, 00:18:17.315 { 00:18:17.315 "name": "BaseBdev4", 00:18:17.315 "uuid": "2277d006-87d3-4783-bd3b-c85b5c702f3e", 00:18:17.315 "is_configured": true, 00:18:17.315 "data_offset": 0, 00:18:17.315 "data_size": 65536 00:18:17.315 } 00:18:17.315 ] 00:18:17.315 }' 00:18:17.315 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.316 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.884 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.884 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.884 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.884 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:17.884 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.884 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:17.884 14:17:54 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:17.884 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.884 14:17:54 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.884 [2024-11-27 14:17:54.928330] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:17.884 "name": "Existed_Raid", 00:18:17.884 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:18:17.884 "strip_size_kb": 64, 00:18:17.884 "state": "configuring", 00:18:17.884 "raid_level": "raid5f", 00:18:17.884 "superblock": false, 00:18:17.884 "num_base_bdevs": 4, 00:18:17.884 "num_base_bdevs_discovered": 2, 00:18:17.884 "num_base_bdevs_operational": 4, 00:18:17.884 "base_bdevs_list": [ 00:18:17.884 { 00:18:17.884 "name": null, 00:18:17.884 "uuid": "1323b430-242c-4dd9-b6f4-303286fb31ae", 00:18:17.884 "is_configured": false, 00:18:17.884 "data_offset": 0, 00:18:17.884 "data_size": 65536 00:18:17.884 }, 00:18:17.884 { 00:18:17.884 "name": null, 00:18:17.884 "uuid": "d178a5c4-9e1e-48bf-a7c1-f1f16050b7cd", 00:18:17.884 "is_configured": false, 00:18:17.884 "data_offset": 0, 00:18:17.884 "data_size": 65536 00:18:17.884 }, 00:18:17.884 { 00:18:17.884 "name": "BaseBdev3", 00:18:17.884 "uuid": "a107b79a-319f-4e29-8bcb-7cf671875cc1", 00:18:17.884 "is_configured": true, 00:18:17.884 "data_offset": 0, 00:18:17.884 "data_size": 65536 00:18:17.884 }, 00:18:17.884 { 00:18:17.884 "name": "BaseBdev4", 00:18:17.884 "uuid": "2277d006-87d3-4783-bd3b-c85b5c702f3e", 00:18:17.884 "is_configured": true, 00:18:17.884 "data_offset": 0, 00:18:17.884 "data_size": 65536 00:18:17.884 } 00:18:17.884 ] 00:18:17.884 }' 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:17.884 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.451 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.451 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:18.451 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.451 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.451 14:17:55 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.451 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:18.451 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:18.451 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.451 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.452 [2024-11-27 14:17:55.564853] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:18.452 "name": "Existed_Raid", 00:18:18.452 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:18.452 "strip_size_kb": 64, 00:18:18.452 "state": "configuring", 00:18:18.452 "raid_level": "raid5f", 00:18:18.452 "superblock": false, 00:18:18.452 "num_base_bdevs": 4, 00:18:18.452 "num_base_bdevs_discovered": 3, 00:18:18.452 "num_base_bdevs_operational": 4, 00:18:18.452 "base_bdevs_list": [ 00:18:18.452 { 00:18:18.452 "name": null, 00:18:18.452 "uuid": "1323b430-242c-4dd9-b6f4-303286fb31ae", 00:18:18.452 "is_configured": false, 00:18:18.452 "data_offset": 0, 00:18:18.452 "data_size": 65536 00:18:18.452 }, 00:18:18.452 { 00:18:18.452 "name": "BaseBdev2", 00:18:18.452 "uuid": "d178a5c4-9e1e-48bf-a7c1-f1f16050b7cd", 00:18:18.452 "is_configured": true, 00:18:18.452 "data_offset": 0, 00:18:18.452 "data_size": 65536 00:18:18.452 }, 00:18:18.452 { 00:18:18.452 "name": "BaseBdev3", 00:18:18.452 "uuid": "a107b79a-319f-4e29-8bcb-7cf671875cc1", 00:18:18.452 "is_configured": true, 00:18:18.452 "data_offset": 0, 00:18:18.452 "data_size": 65536 00:18:18.452 }, 00:18:18.452 { 00:18:18.452 "name": "BaseBdev4", 00:18:18.452 "uuid": "2277d006-87d3-4783-bd3b-c85b5c702f3e", 00:18:18.452 "is_configured": true, 00:18:18.452 "data_offset": 0, 00:18:18.452 "data_size": 65536 00:18:18.452 } 00:18:18.452 ] 00:18:18.452 }' 00:18:18.452 14:17:55 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:18.452 14:17:55 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.019 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.019 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:19.019 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u 1323b430-242c-4dd9-b6f4-303286fb31ae 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.020 [2024-11-27 14:17:56.207519] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:19.020 [2024-11-27 
14:17:56.207782] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:19.020 [2024-11-27 14:17:56.207841] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:19.020 [2024-11-27 14:17:56.208231] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:19.020 [2024-11-27 14:17:56.214238] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:19.020 [2024-11-27 14:17:56.214422] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:19.020 [2024-11-27 14:17:56.214817] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:19.020 NewBaseBdev 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@905 -- # local i 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.020 [ 00:18:19.020 { 00:18:19.020 "name": "NewBaseBdev", 00:18:19.020 "aliases": [ 00:18:19.020 "1323b430-242c-4dd9-b6f4-303286fb31ae" 00:18:19.020 ], 00:18:19.020 "product_name": "Malloc disk", 00:18:19.020 "block_size": 512, 00:18:19.020 "num_blocks": 65536, 00:18:19.020 "uuid": "1323b430-242c-4dd9-b6f4-303286fb31ae", 00:18:19.020 "assigned_rate_limits": { 00:18:19.020 "rw_ios_per_sec": 0, 00:18:19.020 "rw_mbytes_per_sec": 0, 00:18:19.020 "r_mbytes_per_sec": 0, 00:18:19.020 "w_mbytes_per_sec": 0 00:18:19.020 }, 00:18:19.020 "claimed": true, 00:18:19.020 "claim_type": "exclusive_write", 00:18:19.020 "zoned": false, 00:18:19.020 "supported_io_types": { 00:18:19.020 "read": true, 00:18:19.020 "write": true, 00:18:19.020 "unmap": true, 00:18:19.020 "flush": true, 00:18:19.020 "reset": true, 00:18:19.020 "nvme_admin": false, 00:18:19.020 "nvme_io": false, 00:18:19.020 "nvme_io_md": false, 00:18:19.020 "write_zeroes": true, 00:18:19.020 "zcopy": true, 00:18:19.020 "get_zone_info": false, 00:18:19.020 "zone_management": false, 00:18:19.020 "zone_append": false, 00:18:19.020 "compare": false, 00:18:19.020 "compare_and_write": false, 00:18:19.020 "abort": true, 00:18:19.020 "seek_hole": false, 00:18:19.020 "seek_data": false, 00:18:19.020 "copy": true, 00:18:19.020 "nvme_iov_md": false 00:18:19.020 }, 00:18:19.020 "memory_domains": [ 00:18:19.020 { 00:18:19.020 "dma_device_id": "system", 00:18:19.020 "dma_device_type": 1 00:18:19.020 }, 00:18:19.020 { 00:18:19.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:19.020 "dma_device_type": 2 00:18:19.020 } 
00:18:19.020 ], 00:18:19.020 "driver_specific": {} 00:18:19.020 } 00:18:19.020 ] 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@911 -- # return 0 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:19.020 14:17:56 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.280 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:19.280 "name": "Existed_Raid", 00:18:19.280 "uuid": "0bd6dd2b-a0ff-4d57-bc6e-83c2e24df039", 00:18:19.280 "strip_size_kb": 64, 00:18:19.280 "state": "online", 00:18:19.280 "raid_level": "raid5f", 00:18:19.280 "superblock": false, 00:18:19.280 "num_base_bdevs": 4, 00:18:19.280 "num_base_bdevs_discovered": 4, 00:18:19.280 "num_base_bdevs_operational": 4, 00:18:19.280 "base_bdevs_list": [ 00:18:19.280 { 00:18:19.280 "name": "NewBaseBdev", 00:18:19.280 "uuid": "1323b430-242c-4dd9-b6f4-303286fb31ae", 00:18:19.280 "is_configured": true, 00:18:19.280 "data_offset": 0, 00:18:19.280 "data_size": 65536 00:18:19.280 }, 00:18:19.280 { 00:18:19.280 "name": "BaseBdev2", 00:18:19.280 "uuid": "d178a5c4-9e1e-48bf-a7c1-f1f16050b7cd", 00:18:19.280 "is_configured": true, 00:18:19.280 "data_offset": 0, 00:18:19.280 "data_size": 65536 00:18:19.280 }, 00:18:19.280 { 00:18:19.280 "name": "BaseBdev3", 00:18:19.280 "uuid": "a107b79a-319f-4e29-8bcb-7cf671875cc1", 00:18:19.280 "is_configured": true, 00:18:19.280 "data_offset": 0, 00:18:19.280 "data_size": 65536 00:18:19.280 }, 00:18:19.280 { 00:18:19.280 "name": "BaseBdev4", 00:18:19.280 "uuid": "2277d006-87d3-4783-bd3b-c85b5c702f3e", 00:18:19.280 "is_configured": true, 00:18:19.280 "data_offset": 0, 00:18:19.280 "data_size": 65536 00:18:19.280 } 00:18:19.280 ] 00:18:19.280 }' 00:18:19.280 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:19.280 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test 
-- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.850 [2024-11-27 14:17:56.826959] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:19.850 "name": "Existed_Raid", 00:18:19.850 "aliases": [ 00:18:19.850 "0bd6dd2b-a0ff-4d57-bc6e-83c2e24df039" 00:18:19.850 ], 00:18:19.850 "product_name": "Raid Volume", 00:18:19.850 "block_size": 512, 00:18:19.850 "num_blocks": 196608, 00:18:19.850 "uuid": "0bd6dd2b-a0ff-4d57-bc6e-83c2e24df039", 00:18:19.850 "assigned_rate_limits": { 00:18:19.850 "rw_ios_per_sec": 0, 00:18:19.850 "rw_mbytes_per_sec": 0, 00:18:19.850 "r_mbytes_per_sec": 0, 00:18:19.850 "w_mbytes_per_sec": 0 00:18:19.850 }, 00:18:19.850 "claimed": false, 00:18:19.850 "zoned": false, 00:18:19.850 "supported_io_types": { 00:18:19.850 "read": true, 00:18:19.850 "write": true, 00:18:19.850 "unmap": false, 00:18:19.850 "flush": false, 00:18:19.850 "reset": true, 00:18:19.850 "nvme_admin": false, 00:18:19.850 "nvme_io": false, 00:18:19.850 "nvme_io_md": 
false, 00:18:19.850 "write_zeroes": true, 00:18:19.850 "zcopy": false, 00:18:19.850 "get_zone_info": false, 00:18:19.850 "zone_management": false, 00:18:19.850 "zone_append": false, 00:18:19.850 "compare": false, 00:18:19.850 "compare_and_write": false, 00:18:19.850 "abort": false, 00:18:19.850 "seek_hole": false, 00:18:19.850 "seek_data": false, 00:18:19.850 "copy": false, 00:18:19.850 "nvme_iov_md": false 00:18:19.850 }, 00:18:19.850 "driver_specific": { 00:18:19.850 "raid": { 00:18:19.850 "uuid": "0bd6dd2b-a0ff-4d57-bc6e-83c2e24df039", 00:18:19.850 "strip_size_kb": 64, 00:18:19.850 "state": "online", 00:18:19.850 "raid_level": "raid5f", 00:18:19.850 "superblock": false, 00:18:19.850 "num_base_bdevs": 4, 00:18:19.850 "num_base_bdevs_discovered": 4, 00:18:19.850 "num_base_bdevs_operational": 4, 00:18:19.850 "base_bdevs_list": [ 00:18:19.850 { 00:18:19.850 "name": "NewBaseBdev", 00:18:19.850 "uuid": "1323b430-242c-4dd9-b6f4-303286fb31ae", 00:18:19.850 "is_configured": true, 00:18:19.850 "data_offset": 0, 00:18:19.850 "data_size": 65536 00:18:19.850 }, 00:18:19.850 { 00:18:19.850 "name": "BaseBdev2", 00:18:19.850 "uuid": "d178a5c4-9e1e-48bf-a7c1-f1f16050b7cd", 00:18:19.850 "is_configured": true, 00:18:19.850 "data_offset": 0, 00:18:19.850 "data_size": 65536 00:18:19.850 }, 00:18:19.850 { 00:18:19.850 "name": "BaseBdev3", 00:18:19.850 "uuid": "a107b79a-319f-4e29-8bcb-7cf671875cc1", 00:18:19.850 "is_configured": true, 00:18:19.850 "data_offset": 0, 00:18:19.850 "data_size": 65536 00:18:19.850 }, 00:18:19.850 { 00:18:19.850 "name": "BaseBdev4", 00:18:19.850 "uuid": "2277d006-87d3-4783-bd3b-c85b5c702f3e", 00:18:19.850 "is_configured": true, 00:18:19.850 "data_offset": 0, 00:18:19.850 "data_size": 65536 00:18:19.850 } 00:18:19.850 ] 00:18:19.850 } 00:18:19.850 } 00:18:19.850 }' 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:19.850 14:17:56 
bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:19.850 BaseBdev2 00:18:19.850 BaseBdev3 00:18:19.850 BaseBdev4' 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.850 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.851 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.851 14:17:56 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:19.851 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.110 14:17:57 
bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:20.110 [2024-11-27 14:17:57.194744] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:20.110 [2024-11-27 14:17:57.194806] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:20.110 [2024-11-27 14:17:57.194919] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:20.110 [2024-11-27 14:17:57.195297] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:20.110 [2024-11-27 14:17:57.195321] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@326 -- # killprocess 83059 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@954 -- # '[' -z 83059 ']' 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@958 -- # kill -0 83059 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # uname 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83059 00:18:20.110 killing process with pid 83059 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83059' 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@973 -- # kill 83059 00:18:20.110 [2024-11-27 14:17:57.231010] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:20.110 14:17:57 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@978 -- # wait 83059 00:18:20.370 [2024-11-27 14:17:57.604397] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test -- bdev/bdev_raid.sh@328 -- # return 0 00:18:21.748 00:18:21.748 real 0m12.833s 00:18:21.748 user 0m21.268s 00:18:21.748 sys 0m1.846s 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:18:21.748 ************************************ 00:18:21.748 END TEST raid5f_state_function_test 00:18:21.748 ************************************ 00:18:21.748 14:17:58 bdev_raid -- bdev/bdev_raid.sh@987 -- # run_test raid5f_state_function_test_sb raid_state_function_test raid5f 4 true 00:18:21.748 14:17:58 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:18:21.748 14:17:58 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:21.748 14:17:58 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:21.748 ************************************ 00:18:21.748 START TEST 
raid5f_state_function_test_sb 00:18:21.748 ************************************ 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1129 -- # raid_state_function_test raid5f 4 true 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # local raid_level=raid5f 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=4 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev3 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # echo BaseBdev4 00:18:21.748 
14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@211 -- # local strip_size 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # '[' raid5f '!=' raid1 ']' 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@216 -- # strip_size=64 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@217 -- # strip_size_create_arg='-z 64' 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@229 -- # raid_pid=83738 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:18:21.748 Process raid pid: 83738 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 83738' 00:18:21.748 14:17:58 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # waitforlisten 83738 00:18:21.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@835 -- # '[' -z 83738 ']' 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:21.748 14:17:58 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:21.748 [2024-11-27 14:17:58.810286] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:18:21.748 [2024-11-27 14:17:58.810512] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:21.748 [2024-11-27 14:17:59.002964] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.007 [2024-11-27 14:17:59.166572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.266 [2024-11-27 14:17:59.377636] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.266 [2024-11-27 14:17:59.377683] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@868 -- # return 0 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.833 [2024-11-27 14:17:59.814865] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:22.833 [2024-11-27 14:17:59.815091] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:22.833 [2024-11-27 14:17:59.815126] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:22.833 [2024-11-27 14:17:59.815144] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:22.833 [2024-11-27 14:17:59.815154] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: BaseBdev3 00:18:22.833 [2024-11-27 14:17:59.815168] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:22.833 [2024-11-27 14:17:59.815177] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:22.833 [2024-11-27 14:17:59.815191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name 
== "Existed_Raid")' 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.833 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:22.833 "name": "Existed_Raid", 00:18:22.833 "uuid": "5114ca9b-931c-4aca-9876-99974a666925", 00:18:22.833 "strip_size_kb": 64, 00:18:22.833 "state": "configuring", 00:18:22.833 "raid_level": "raid5f", 00:18:22.833 "superblock": true, 00:18:22.833 "num_base_bdevs": 4, 00:18:22.833 "num_base_bdevs_discovered": 0, 00:18:22.833 "num_base_bdevs_operational": 4, 00:18:22.833 "base_bdevs_list": [ 00:18:22.833 { 00:18:22.833 "name": "BaseBdev1", 00:18:22.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.833 "is_configured": false, 00:18:22.833 "data_offset": 0, 00:18:22.833 "data_size": 0 00:18:22.833 }, 00:18:22.833 { 00:18:22.833 "name": "BaseBdev2", 00:18:22.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.833 "is_configured": false, 00:18:22.833 "data_offset": 0, 00:18:22.833 "data_size": 0 00:18:22.833 }, 00:18:22.833 { 00:18:22.833 "name": "BaseBdev3", 00:18:22.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.833 "is_configured": false, 00:18:22.833 "data_offset": 0, 00:18:22.833 "data_size": 0 00:18:22.833 }, 00:18:22.833 { 00:18:22.833 "name": "BaseBdev4", 00:18:22.833 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:22.833 "is_configured": false, 00:18:22.833 "data_offset": 0, 00:18:22.833 "data_size": 0 00:18:22.833 } 00:18:22.834 ] 00:18:22.834 }' 00:18:22.834 14:17:59 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:22.834 14:17:59 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
00:18:23.093 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:23.093 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.093 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.093 [2024-11-27 14:18:00.334973] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.093 [2024-11-27 14:18:00.335238] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:18:23.093 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.093 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:23.093 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.093 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.093 [2024-11-27 14:18:00.346975] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:23.093 [2024-11-27 14:18:00.347028] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:23.093 [2024-11-27 14:18:00.347044] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.093 [2024-11-27 14:18:00.347082] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.093 [2024-11-27 14:18:00.347106] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:23.093 [2024-11-27 14:18:00.347134] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:23.093 [2024-11-27 14:18:00.347143] 
bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:23.093 [2024-11-27 14:18:00.347155] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:23.093 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.093 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:23.093 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.093 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.353 [2024-11-27 14:18:00.394695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.353 BaseBdev1 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.353 [ 00:18:23.353 { 00:18:23.353 "name": "BaseBdev1", 00:18:23.353 "aliases": [ 00:18:23.353 "ced1a8ab-0ea7-4237-871f-eb2157bacb97" 00:18:23.353 ], 00:18:23.353 "product_name": "Malloc disk", 00:18:23.353 "block_size": 512, 00:18:23.353 "num_blocks": 65536, 00:18:23.353 "uuid": "ced1a8ab-0ea7-4237-871f-eb2157bacb97", 00:18:23.353 "assigned_rate_limits": { 00:18:23.353 "rw_ios_per_sec": 0, 00:18:23.353 "rw_mbytes_per_sec": 0, 00:18:23.353 "r_mbytes_per_sec": 0, 00:18:23.353 "w_mbytes_per_sec": 0 00:18:23.353 }, 00:18:23.353 "claimed": true, 00:18:23.353 "claim_type": "exclusive_write", 00:18:23.353 "zoned": false, 00:18:23.353 "supported_io_types": { 00:18:23.353 "read": true, 00:18:23.353 "write": true, 00:18:23.353 "unmap": true, 00:18:23.353 "flush": true, 00:18:23.353 "reset": true, 00:18:23.353 "nvme_admin": false, 00:18:23.353 "nvme_io": false, 00:18:23.353 "nvme_io_md": false, 00:18:23.353 "write_zeroes": true, 00:18:23.353 "zcopy": true, 00:18:23.353 "get_zone_info": false, 00:18:23.353 "zone_management": false, 00:18:23.353 "zone_append": false, 00:18:23.353 "compare": false, 00:18:23.353 "compare_and_write": false, 00:18:23.353 "abort": true, 00:18:23.353 "seek_hole": false, 00:18:23.353 "seek_data": false, 00:18:23.353 "copy": true, 00:18:23.353 "nvme_iov_md": false 00:18:23.353 }, 00:18:23.353 "memory_domains": [ 00:18:23.353 { 00:18:23.353 "dma_device_id": "system", 00:18:23.353 "dma_device_type": 1 00:18:23.353 }, 00:18:23.353 { 00:18:23.353 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:18:23.353 "dma_device_type": 2 00:18:23.353 } 00:18:23.353 ], 00:18:23.353 "driver_specific": {} 00:18:23.353 } 00:18:23.353 ] 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.353 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.354 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.354 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.354 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.354 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.354 14:18:00 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.354 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.354 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.354 "name": "Existed_Raid", 00:18:23.354 "uuid": "93501421-29c9-4474-89f3-06f586ee8877", 00:18:23.354 "strip_size_kb": 64, 00:18:23.354 "state": "configuring", 00:18:23.354 "raid_level": "raid5f", 00:18:23.354 "superblock": true, 00:18:23.354 "num_base_bdevs": 4, 00:18:23.354 "num_base_bdevs_discovered": 1, 00:18:23.354 "num_base_bdevs_operational": 4, 00:18:23.354 "base_bdevs_list": [ 00:18:23.354 { 00:18:23.354 "name": "BaseBdev1", 00:18:23.354 "uuid": "ced1a8ab-0ea7-4237-871f-eb2157bacb97", 00:18:23.354 "is_configured": true, 00:18:23.354 "data_offset": 2048, 00:18:23.354 "data_size": 63488 00:18:23.354 }, 00:18:23.354 { 00:18:23.354 "name": "BaseBdev2", 00:18:23.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.354 "is_configured": false, 00:18:23.354 "data_offset": 0, 00:18:23.354 "data_size": 0 00:18:23.354 }, 00:18:23.354 { 00:18:23.354 "name": "BaseBdev3", 00:18:23.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.354 "is_configured": false, 00:18:23.354 "data_offset": 0, 00:18:23.354 "data_size": 0 00:18:23.354 }, 00:18:23.354 { 00:18:23.354 "name": "BaseBdev4", 00:18:23.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.354 "is_configured": false, 00:18:23.354 "data_offset": 0, 00:18:23.354 "data_size": 0 00:18:23.354 } 00:18:23.354 ] 00:18:23.354 }' 00:18:23.354 14:18:00 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.354 14:18:00 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.922 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:23.922 14:18:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.923 [2024-11-27 14:18:01.102981] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:23.923 [2024-11-27 14:18:01.103274] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.923 [2024-11-27 14:18:01.111041] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:23.923 [2024-11-27 14:18:01.113543] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:18:23.923 [2024-11-27 14:18:01.113597] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:18:23.923 [2024-11-27 14:18:01.113614] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:18:23.923 [2024-11-27 14:18:01.113661] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:18:23.923 [2024-11-27 14:18:01.113670] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:18:23.923 [2024-11-27 14:18:01.113699] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:23.923 14:18:01 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:23.923 "name": "Existed_Raid", 00:18:23.923 "uuid": "9a943b3c-9bcb-4ddf-bf5e-a30d1d86170d", 00:18:23.923 "strip_size_kb": 64, 00:18:23.923 "state": "configuring", 00:18:23.923 "raid_level": "raid5f", 00:18:23.923 "superblock": true, 00:18:23.923 "num_base_bdevs": 4, 00:18:23.923 "num_base_bdevs_discovered": 1, 00:18:23.923 "num_base_bdevs_operational": 4, 00:18:23.923 "base_bdevs_list": [ 00:18:23.923 { 00:18:23.923 "name": "BaseBdev1", 00:18:23.923 "uuid": "ced1a8ab-0ea7-4237-871f-eb2157bacb97", 00:18:23.923 "is_configured": true, 00:18:23.923 "data_offset": 2048, 00:18:23.923 "data_size": 63488 00:18:23.923 }, 00:18:23.923 { 00:18:23.923 "name": "BaseBdev2", 00:18:23.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.923 "is_configured": false, 00:18:23.923 "data_offset": 0, 00:18:23.923 "data_size": 0 00:18:23.923 }, 00:18:23.923 { 00:18:23.923 "name": "BaseBdev3", 00:18:23.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.923 "is_configured": false, 00:18:23.923 "data_offset": 0, 00:18:23.923 "data_size": 0 00:18:23.923 }, 00:18:23.923 { 00:18:23.923 "name": "BaseBdev4", 00:18:23.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:23.923 "is_configured": false, 00:18:23.923 "data_offset": 0, 00:18:23.923 "data_size": 0 00:18:23.923 } 00:18:23.923 ] 00:18:23.923 }' 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:23.923 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.491 [2024-11-27 14:18:01.702012] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:24.491 BaseBdev2 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.491 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.491 [ 00:18:24.491 { 00:18:24.491 "name": "BaseBdev2", 00:18:24.491 "aliases": [ 00:18:24.491 
"b7ab0430-6234-45b6-80aa-1ddc442b03e3" 00:18:24.491 ], 00:18:24.491 "product_name": "Malloc disk", 00:18:24.491 "block_size": 512, 00:18:24.491 "num_blocks": 65536, 00:18:24.491 "uuid": "b7ab0430-6234-45b6-80aa-1ddc442b03e3", 00:18:24.491 "assigned_rate_limits": { 00:18:24.491 "rw_ios_per_sec": 0, 00:18:24.491 "rw_mbytes_per_sec": 0, 00:18:24.491 "r_mbytes_per_sec": 0, 00:18:24.491 "w_mbytes_per_sec": 0 00:18:24.491 }, 00:18:24.491 "claimed": true, 00:18:24.491 "claim_type": "exclusive_write", 00:18:24.491 "zoned": false, 00:18:24.491 "supported_io_types": { 00:18:24.491 "read": true, 00:18:24.491 "write": true, 00:18:24.491 "unmap": true, 00:18:24.491 "flush": true, 00:18:24.491 "reset": true, 00:18:24.491 "nvme_admin": false, 00:18:24.491 "nvme_io": false, 00:18:24.491 "nvme_io_md": false, 00:18:24.491 "write_zeroes": true, 00:18:24.491 "zcopy": true, 00:18:24.491 "get_zone_info": false, 00:18:24.491 "zone_management": false, 00:18:24.491 "zone_append": false, 00:18:24.491 "compare": false, 00:18:24.491 "compare_and_write": false, 00:18:24.491 "abort": true, 00:18:24.491 "seek_hole": false, 00:18:24.491 "seek_data": false, 00:18:24.491 "copy": true, 00:18:24.491 "nvme_iov_md": false 00:18:24.491 }, 00:18:24.491 "memory_domains": [ 00:18:24.491 { 00:18:24.491 "dma_device_id": "system", 00:18:24.491 "dma_device_type": 1 00:18:24.491 }, 00:18:24.491 { 00:18:24.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:24.491 "dma_device_type": 2 00:18:24.491 } 00:18:24.491 ], 00:18:24.491 "driver_specific": {} 00:18:24.491 } 00:18:24.491 ] 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 
00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:24.492 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.751 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:24.751 "name": "Existed_Raid", 00:18:24.751 "uuid": 
"9a943b3c-9bcb-4ddf-bf5e-a30d1d86170d", 00:18:24.751 "strip_size_kb": 64, 00:18:24.751 "state": "configuring", 00:18:24.751 "raid_level": "raid5f", 00:18:24.751 "superblock": true, 00:18:24.751 "num_base_bdevs": 4, 00:18:24.751 "num_base_bdevs_discovered": 2, 00:18:24.751 "num_base_bdevs_operational": 4, 00:18:24.751 "base_bdevs_list": [ 00:18:24.751 { 00:18:24.751 "name": "BaseBdev1", 00:18:24.751 "uuid": "ced1a8ab-0ea7-4237-871f-eb2157bacb97", 00:18:24.751 "is_configured": true, 00:18:24.751 "data_offset": 2048, 00:18:24.751 "data_size": 63488 00:18:24.751 }, 00:18:24.751 { 00:18:24.751 "name": "BaseBdev2", 00:18:24.751 "uuid": "b7ab0430-6234-45b6-80aa-1ddc442b03e3", 00:18:24.751 "is_configured": true, 00:18:24.751 "data_offset": 2048, 00:18:24.751 "data_size": 63488 00:18:24.751 }, 00:18:24.751 { 00:18:24.751 "name": "BaseBdev3", 00:18:24.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.751 "is_configured": false, 00:18:24.751 "data_offset": 0, 00:18:24.751 "data_size": 0 00:18:24.751 }, 00:18:24.751 { 00:18:24.751 "name": "BaseBdev4", 00:18:24.751 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:24.751 "is_configured": false, 00:18:24.751 "data_offset": 0, 00:18:24.751 "data_size": 0 00:18:24.751 } 00:18:24.751 ] 00:18:24.751 }' 00:18:24.751 14:18:01 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:24.751 14:18:01 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.010 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:25.010 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.010 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.270 [2024-11-27 14:18:02.304150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:25.270 BaseBdev3 
00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev3 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.270 [ 00:18:25.270 { 00:18:25.270 "name": "BaseBdev3", 00:18:25.270 "aliases": [ 00:18:25.270 "25285e12-732c-47ff-82fc-44d3bb63c315" 00:18:25.270 ], 00:18:25.270 "product_name": "Malloc disk", 00:18:25.270 "block_size": 512, 00:18:25.270 "num_blocks": 65536, 00:18:25.270 "uuid": "25285e12-732c-47ff-82fc-44d3bb63c315", 00:18:25.270 
"assigned_rate_limits": { 00:18:25.270 "rw_ios_per_sec": 0, 00:18:25.270 "rw_mbytes_per_sec": 0, 00:18:25.270 "r_mbytes_per_sec": 0, 00:18:25.270 "w_mbytes_per_sec": 0 00:18:25.270 }, 00:18:25.270 "claimed": true, 00:18:25.270 "claim_type": "exclusive_write", 00:18:25.270 "zoned": false, 00:18:25.270 "supported_io_types": { 00:18:25.270 "read": true, 00:18:25.270 "write": true, 00:18:25.270 "unmap": true, 00:18:25.270 "flush": true, 00:18:25.270 "reset": true, 00:18:25.270 "nvme_admin": false, 00:18:25.270 "nvme_io": false, 00:18:25.270 "nvme_io_md": false, 00:18:25.270 "write_zeroes": true, 00:18:25.270 "zcopy": true, 00:18:25.270 "get_zone_info": false, 00:18:25.270 "zone_management": false, 00:18:25.270 "zone_append": false, 00:18:25.270 "compare": false, 00:18:25.270 "compare_and_write": false, 00:18:25.270 "abort": true, 00:18:25.270 "seek_hole": false, 00:18:25.270 "seek_data": false, 00:18:25.270 "copy": true, 00:18:25.270 "nvme_iov_md": false 00:18:25.270 }, 00:18:25.270 "memory_domains": [ 00:18:25.270 { 00:18:25.270 "dma_device_id": "system", 00:18:25.270 "dma_device_type": 1 00:18:25.270 }, 00:18:25.270 { 00:18:25.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.270 "dma_device_type": 2 00:18:25.270 } 00:18:25.270 ], 00:18:25.270 "driver_specific": {} 00:18:25.270 } 00:18:25.270 ] 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 
-- # local raid_bdev_name=Existed_Raid 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.270 "name": "Existed_Raid", 00:18:25.270 "uuid": "9a943b3c-9bcb-4ddf-bf5e-a30d1d86170d", 00:18:25.270 "strip_size_kb": 64, 00:18:25.270 "state": "configuring", 00:18:25.270 "raid_level": "raid5f", 00:18:25.270 "superblock": true, 00:18:25.270 "num_base_bdevs": 4, 00:18:25.270 "num_base_bdevs_discovered": 3, 
00:18:25.270 "num_base_bdevs_operational": 4, 00:18:25.270 "base_bdevs_list": [ 00:18:25.270 { 00:18:25.270 "name": "BaseBdev1", 00:18:25.270 "uuid": "ced1a8ab-0ea7-4237-871f-eb2157bacb97", 00:18:25.270 "is_configured": true, 00:18:25.270 "data_offset": 2048, 00:18:25.270 "data_size": 63488 00:18:25.270 }, 00:18:25.270 { 00:18:25.270 "name": "BaseBdev2", 00:18:25.270 "uuid": "b7ab0430-6234-45b6-80aa-1ddc442b03e3", 00:18:25.270 "is_configured": true, 00:18:25.270 "data_offset": 2048, 00:18:25.270 "data_size": 63488 00:18:25.270 }, 00:18:25.270 { 00:18:25.270 "name": "BaseBdev3", 00:18:25.270 "uuid": "25285e12-732c-47ff-82fc-44d3bb63c315", 00:18:25.270 "is_configured": true, 00:18:25.270 "data_offset": 2048, 00:18:25.270 "data_size": 63488 00:18:25.270 }, 00:18:25.270 { 00:18:25.270 "name": "BaseBdev4", 00:18:25.270 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:25.270 "is_configured": false, 00:18:25.270 "data_offset": 0, 00:18:25.270 "data_size": 0 00:18:25.270 } 00:18:25.270 ] 00:18:25.270 }' 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.270 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.840 [2024-11-27 14:18:02.923481] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:25.840 [2024-11-27 14:18:02.923874] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:25.840 [2024-11-27 14:18:02.923894] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:25.840 BaseBdev4 
00:18:25.840 [2024-11-27 14:18:02.924242] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev4 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.840 [2024-11-27 14:18:02.931502] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:25.840 [2024-11-27 14:18:02.931564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:18:25.840 [2024-11-27 14:18:02.931884] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:25.840 14:18:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:25.840 [ 00:18:25.840 { 00:18:25.840 "name": "BaseBdev4", 00:18:25.840 "aliases": [ 00:18:25.840 "fde8d391-47fe-4b4f-84ab-2a29b2eea9e3" 00:18:25.840 ], 00:18:25.840 "product_name": "Malloc disk", 00:18:25.840 "block_size": 512, 00:18:25.840 "num_blocks": 65536, 00:18:25.840 "uuid": "fde8d391-47fe-4b4f-84ab-2a29b2eea9e3", 00:18:25.840 "assigned_rate_limits": { 00:18:25.840 "rw_ios_per_sec": 0, 00:18:25.840 "rw_mbytes_per_sec": 0, 00:18:25.840 "r_mbytes_per_sec": 0, 00:18:25.840 "w_mbytes_per_sec": 0 00:18:25.840 }, 00:18:25.840 "claimed": true, 00:18:25.840 "claim_type": "exclusive_write", 00:18:25.840 "zoned": false, 00:18:25.840 "supported_io_types": { 00:18:25.840 "read": true, 00:18:25.840 "write": true, 00:18:25.840 "unmap": true, 00:18:25.840 "flush": true, 00:18:25.840 "reset": true, 00:18:25.840 "nvme_admin": false, 00:18:25.840 "nvme_io": false, 00:18:25.840 "nvme_io_md": false, 00:18:25.840 "write_zeroes": true, 00:18:25.840 "zcopy": true, 00:18:25.840 "get_zone_info": false, 00:18:25.840 "zone_management": false, 00:18:25.840 "zone_append": false, 00:18:25.840 "compare": false, 00:18:25.840 "compare_and_write": false, 00:18:25.840 "abort": true, 00:18:25.840 "seek_hole": false, 00:18:25.840 "seek_data": false, 00:18:25.840 "copy": true, 00:18:25.840 "nvme_iov_md": false 00:18:25.840 }, 00:18:25.840 "memory_domains": [ 00:18:25.840 { 00:18:25.840 "dma_device_id": "system", 00:18:25.840 "dma_device_type": 1 00:18:25.840 }, 00:18:25.840 { 00:18:25.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:25.840 "dma_device_type": 2 00:18:25.840 } 00:18:25.840 ], 00:18:25.840 "driver_specific": {} 00:18:25.840 } 00:18:25.840 ] 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.840 14:18:02 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 
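Both BaseBdev3 and BaseBdev4 pass through the waitforbdev helper whose trace appears above (local bdev_name, default bdev_timeout=2000, bdev_wait_for_examine, then bdev_get_bdevs -b <name> -t 2000 before "return 0"). In shape it is a poll-until-present guard; a stubbed sketch of that shape (the rpc_cmd stub and the loop internals are assumptions for illustration, so the snippet runs without a live SPDK target — the real helper lives in common/autotest_common.sh):

```shell
# Poll-until-present loop in the shape of the traced waitforbdev helper.
# rpc_cmd is stubbed with a canned bdev_get_bdevs reply; against a real
# target it would wrap SPDK's scripts/rpc.py instead.
rpc_cmd() { echo '[{"name": "BaseBdev4", "block_size": 512}]'; }

waitforbdev() {
    local bdev_name=$1 bdev_timeout=${2:-2000} i
    for (( i = 0; i < bdev_timeout; i += 100 )); do
        # Succeed as soon as the named bdev shows up in bdev_get_bdevs.
        if rpc_cmd bdev_get_bdevs -b "$bdev_name" | grep -q "\"name\": \"$bdev_name\""; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

waitforbdev BaseBdev4 && echo "BaseBdev4 ready"
```

Once the helper returns 0, the enclosing loop increments i and re-runs verify_raid_bdev_state — which is exactly the (( i++ )) / verify sequence the trace shows next.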
00:18:25.840 14:18:02 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.840 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:25.840 "name": "Existed_Raid", 00:18:25.840 "uuid": "9a943b3c-9bcb-4ddf-bf5e-a30d1d86170d", 00:18:25.840 "strip_size_kb": 64, 00:18:25.840 "state": "online", 00:18:25.840 "raid_level": "raid5f", 00:18:25.840 "superblock": true, 00:18:25.840 "num_base_bdevs": 4, 00:18:25.840 "num_base_bdevs_discovered": 4, 00:18:25.840 "num_base_bdevs_operational": 4, 00:18:25.840 "base_bdevs_list": [ 00:18:25.840 { 00:18:25.840 "name": "BaseBdev1", 00:18:25.840 "uuid": "ced1a8ab-0ea7-4237-871f-eb2157bacb97", 00:18:25.840 "is_configured": true, 00:18:25.840 "data_offset": 2048, 00:18:25.840 "data_size": 63488 00:18:25.840 }, 00:18:25.840 { 00:18:25.840 "name": "BaseBdev2", 00:18:25.840 "uuid": "b7ab0430-6234-45b6-80aa-1ddc442b03e3", 00:18:25.840 "is_configured": true, 00:18:25.840 "data_offset": 2048, 00:18:25.840 "data_size": 63488 00:18:25.840 }, 00:18:25.840 { 00:18:25.840 "name": "BaseBdev3", 00:18:25.840 "uuid": "25285e12-732c-47ff-82fc-44d3bb63c315", 00:18:25.840 "is_configured": true, 00:18:25.840 "data_offset": 2048, 00:18:25.840 "data_size": 63488 00:18:25.840 }, 00:18:25.840 { 00:18:25.840 "name": "BaseBdev4", 00:18:25.840 "uuid": "fde8d391-47fe-4b4f-84ab-2a29b2eea9e3", 00:18:25.840 "is_configured": true, 00:18:25.840 "data_offset": 2048, 00:18:25.840 "data_size": 63488 00:18:25.840 } 00:18:25.840 ] 00:18:25.840 }' 00:18:25.840 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:25.840 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # 
local raid_bdev_name=Existed_Raid 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.408 [2024-11-27 14:18:03.479319] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:26.408 "name": "Existed_Raid", 00:18:26.408 "aliases": [ 00:18:26.408 "9a943b3c-9bcb-4ddf-bf5e-a30d1d86170d" 00:18:26.408 ], 00:18:26.408 "product_name": "Raid Volume", 00:18:26.408 "block_size": 512, 00:18:26.408 "num_blocks": 190464, 00:18:26.408 "uuid": "9a943b3c-9bcb-4ddf-bf5e-a30d1d86170d", 00:18:26.408 "assigned_rate_limits": { 00:18:26.408 "rw_ios_per_sec": 0, 00:18:26.408 "rw_mbytes_per_sec": 0, 00:18:26.408 "r_mbytes_per_sec": 0, 00:18:26.408 "w_mbytes_per_sec": 0 00:18:26.408 }, 00:18:26.408 "claimed": false, 00:18:26.408 "zoned": false, 00:18:26.408 "supported_io_types": { 00:18:26.408 "read": true, 00:18:26.408 "write": true, 00:18:26.408 "unmap": false, 00:18:26.408 "flush": false, 
00:18:26.408 "reset": true, 00:18:26.408 "nvme_admin": false, 00:18:26.408 "nvme_io": false, 00:18:26.408 "nvme_io_md": false, 00:18:26.408 "write_zeroes": true, 00:18:26.408 "zcopy": false, 00:18:26.408 "get_zone_info": false, 00:18:26.408 "zone_management": false, 00:18:26.408 "zone_append": false, 00:18:26.408 "compare": false, 00:18:26.408 "compare_and_write": false, 00:18:26.408 "abort": false, 00:18:26.408 "seek_hole": false, 00:18:26.408 "seek_data": false, 00:18:26.408 "copy": false, 00:18:26.408 "nvme_iov_md": false 00:18:26.408 }, 00:18:26.408 "driver_specific": { 00:18:26.408 "raid": { 00:18:26.408 "uuid": "9a943b3c-9bcb-4ddf-bf5e-a30d1d86170d", 00:18:26.408 "strip_size_kb": 64, 00:18:26.408 "state": "online", 00:18:26.408 "raid_level": "raid5f", 00:18:26.408 "superblock": true, 00:18:26.408 "num_base_bdevs": 4, 00:18:26.408 "num_base_bdevs_discovered": 4, 00:18:26.408 "num_base_bdevs_operational": 4, 00:18:26.408 "base_bdevs_list": [ 00:18:26.408 { 00:18:26.408 "name": "BaseBdev1", 00:18:26.408 "uuid": "ced1a8ab-0ea7-4237-871f-eb2157bacb97", 00:18:26.408 "is_configured": true, 00:18:26.408 "data_offset": 2048, 00:18:26.408 "data_size": 63488 00:18:26.408 }, 00:18:26.408 { 00:18:26.408 "name": "BaseBdev2", 00:18:26.408 "uuid": "b7ab0430-6234-45b6-80aa-1ddc442b03e3", 00:18:26.408 "is_configured": true, 00:18:26.408 "data_offset": 2048, 00:18:26.408 "data_size": 63488 00:18:26.408 }, 00:18:26.408 { 00:18:26.408 "name": "BaseBdev3", 00:18:26.408 "uuid": "25285e12-732c-47ff-82fc-44d3bb63c315", 00:18:26.408 "is_configured": true, 00:18:26.408 "data_offset": 2048, 00:18:26.408 "data_size": 63488 00:18:26.408 }, 00:18:26.408 { 00:18:26.408 "name": "BaseBdev4", 00:18:26.408 "uuid": "fde8d391-47fe-4b4f-84ab-2a29b2eea9e3", 00:18:26.408 "is_configured": true, 00:18:26.408 "data_offset": 2048, 00:18:26.408 "data_size": 63488 00:18:26.408 } 00:18:26.408 ] 00:18:26.408 } 00:18:26.408 } 00:18:26.408 }' 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:18:26.408 BaseBdev2 00:18:26.408 BaseBdev3 00:18:26.408 BaseBdev4' 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.408 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:18:26.409 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.409 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.409 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.409 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:26.668 14:18:03 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.668 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.668 [2024-11-27 14:18:03.863234] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # local expected_state 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@261 -- # has_redundancy raid5f 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@199 -- # return 0 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 3 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:26.928 14:18:03 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.928 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:26.928 "name": "Existed_Raid", 00:18:26.928 "uuid": "9a943b3c-9bcb-4ddf-bf5e-a30d1d86170d", 00:18:26.928 "strip_size_kb": 64, 00:18:26.928 "state": "online", 00:18:26.928 "raid_level": "raid5f", 00:18:26.928 "superblock": true, 00:18:26.928 "num_base_bdevs": 4, 00:18:26.928 "num_base_bdevs_discovered": 3, 00:18:26.928 "num_base_bdevs_operational": 3, 00:18:26.928 "base_bdevs_list": [ 00:18:26.928 { 00:18:26.928 "name": 
null, 00:18:26.928 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:26.928 "is_configured": false, 00:18:26.928 "data_offset": 0, 00:18:26.928 "data_size": 63488 00:18:26.928 }, 00:18:26.928 { 00:18:26.928 "name": "BaseBdev2", 00:18:26.928 "uuid": "b7ab0430-6234-45b6-80aa-1ddc442b03e3", 00:18:26.928 "is_configured": true, 00:18:26.928 "data_offset": 2048, 00:18:26.928 "data_size": 63488 00:18:26.928 }, 00:18:26.928 { 00:18:26.928 "name": "BaseBdev3", 00:18:26.928 "uuid": "25285e12-732c-47ff-82fc-44d3bb63c315", 00:18:26.928 "is_configured": true, 00:18:26.928 "data_offset": 2048, 00:18:26.928 "data_size": 63488 00:18:26.928 }, 00:18:26.928 { 00:18:26.928 "name": "BaseBdev4", 00:18:26.928 "uuid": "fde8d391-47fe-4b4f-84ab-2a29b2eea9e3", 00:18:26.928 "is_configured": true, 00:18:26.928 "data_offset": 2048, 00:18:26.928 "data_size": 63488 00:18:26.928 } 00:18:26.928 ] 00:18:26.928 }' 00:18:26.928 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:26.928 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # 
raid_bdev=Existed_Raid 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.495 [2024-11-27 14:18:04.524706] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:27.495 [2024-11-27 14:18:04.524957] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:27.495 [2024-11-27 14:18:04.613869] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:27.495 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' 
Existed_Raid ']' 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev3 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.496 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.496 [2024-11-27 14:18:04.669964] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev4 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.754 [2024-11-27 
14:18:04.845244] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:18:27.754 [2024-11-27 14:18:04.845324] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@284 -- # '[' 4 -gt 2 ']' 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i = 1 )) 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2 00:18:27.754 14:18:04 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.754 14:18:04 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.013 BaseBdev2 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev2 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:18:28.013 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.014 [ 00:18:28.014 { 00:18:28.014 "name": "BaseBdev2", 00:18:28.014 "aliases": [ 00:18:28.014 "0c43bf93-41d4-44ae-8654-978bde281cc2" 00:18:28.014 ], 00:18:28.014 "product_name": "Malloc disk", 00:18:28.014 "block_size": 512, 00:18:28.014 
"num_blocks": 65536, 00:18:28.014 "uuid": "0c43bf93-41d4-44ae-8654-978bde281cc2", 00:18:28.014 "assigned_rate_limits": { 00:18:28.014 "rw_ios_per_sec": 0, 00:18:28.014 "rw_mbytes_per_sec": 0, 00:18:28.014 "r_mbytes_per_sec": 0, 00:18:28.014 "w_mbytes_per_sec": 0 00:18:28.014 }, 00:18:28.014 "claimed": false, 00:18:28.014 "zoned": false, 00:18:28.014 "supported_io_types": { 00:18:28.014 "read": true, 00:18:28.014 "write": true, 00:18:28.014 "unmap": true, 00:18:28.014 "flush": true, 00:18:28.014 "reset": true, 00:18:28.014 "nvme_admin": false, 00:18:28.014 "nvme_io": false, 00:18:28.014 "nvme_io_md": false, 00:18:28.014 "write_zeroes": true, 00:18:28.014 "zcopy": true, 00:18:28.014 "get_zone_info": false, 00:18:28.014 "zone_management": false, 00:18:28.014 "zone_append": false, 00:18:28.014 "compare": false, 00:18:28.014 "compare_and_write": false, 00:18:28.014 "abort": true, 00:18:28.014 "seek_hole": false, 00:18:28.014 "seek_data": false, 00:18:28.014 "copy": true, 00:18:28.014 "nvme_iov_md": false 00:18:28.014 }, 00:18:28.014 "memory_domains": [ 00:18:28.014 { 00:18:28.014 "dma_device_id": "system", 00:18:28.014 "dma_device_type": 1 00:18:28.014 }, 00:18:28.014 { 00:18:28.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.014 "dma_device_type": 2 00:18:28.014 } 00:18:28.014 ], 00:18:28.014 "driver_specific": {} 00:18:28.014 } 00:18:28.014 ] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3 00:18:28.014 14:18:05 
bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.014 BaseBdev3 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev3 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev3 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 -t 2000 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.014 [ 00:18:28.014 { 00:18:28.014 "name": "BaseBdev3", 00:18:28.014 "aliases": [ 00:18:28.014 
"8d575118-7bc4-42d1-9873-344544f88cce" 00:18:28.014 ], 00:18:28.014 "product_name": "Malloc disk", 00:18:28.014 "block_size": 512, 00:18:28.014 "num_blocks": 65536, 00:18:28.014 "uuid": "8d575118-7bc4-42d1-9873-344544f88cce", 00:18:28.014 "assigned_rate_limits": { 00:18:28.014 "rw_ios_per_sec": 0, 00:18:28.014 "rw_mbytes_per_sec": 0, 00:18:28.014 "r_mbytes_per_sec": 0, 00:18:28.014 "w_mbytes_per_sec": 0 00:18:28.014 }, 00:18:28.014 "claimed": false, 00:18:28.014 "zoned": false, 00:18:28.014 "supported_io_types": { 00:18:28.014 "read": true, 00:18:28.014 "write": true, 00:18:28.014 "unmap": true, 00:18:28.014 "flush": true, 00:18:28.014 "reset": true, 00:18:28.014 "nvme_admin": false, 00:18:28.014 "nvme_io": false, 00:18:28.014 "nvme_io_md": false, 00:18:28.014 "write_zeroes": true, 00:18:28.014 "zcopy": true, 00:18:28.014 "get_zone_info": false, 00:18:28.014 "zone_management": false, 00:18:28.014 "zone_append": false, 00:18:28.014 "compare": false, 00:18:28.014 "compare_and_write": false, 00:18:28.014 "abort": true, 00:18:28.014 "seek_hole": false, 00:18:28.014 "seek_data": false, 00:18:28.014 "copy": true, 00:18:28.014 "nvme_iov_md": false 00:18:28.014 }, 00:18:28.014 "memory_domains": [ 00:18:28.014 { 00:18:28.014 "dma_device_id": "system", 00:18:28.014 "dma_device_type": 1 00:18:28.014 }, 00:18:28.014 { 00:18:28.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.014 "dma_device_type": 2 00:18:28.014 } 00:18:28.014 ], 00:18:28.014 "driver_specific": {} 00:18:28.014 } 00:18:28.014 ] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:28.014 14:18:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.014 BaseBdev4 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@288 -- # waitforbdev BaseBdev4 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev4 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 -t 2000 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:28.014 [ 00:18:28.014 { 00:18:28.014 "name": "BaseBdev4", 00:18:28.014 "aliases": [ 00:18:28.014 "69dfa80f-942f-4d7e-9757-a861b47df19f" 00:18:28.014 ], 00:18:28.014 "product_name": "Malloc disk", 00:18:28.014 "block_size": 512, 00:18:28.014 "num_blocks": 65536, 00:18:28.014 "uuid": "69dfa80f-942f-4d7e-9757-a861b47df19f", 00:18:28.014 "assigned_rate_limits": { 00:18:28.014 "rw_ios_per_sec": 0, 00:18:28.014 "rw_mbytes_per_sec": 0, 00:18:28.014 "r_mbytes_per_sec": 0, 00:18:28.014 "w_mbytes_per_sec": 0 00:18:28.014 }, 00:18:28.014 "claimed": false, 00:18:28.014 "zoned": false, 00:18:28.014 "supported_io_types": { 00:18:28.014 "read": true, 00:18:28.014 "write": true, 00:18:28.014 "unmap": true, 00:18:28.014 "flush": true, 00:18:28.014 "reset": true, 00:18:28.014 "nvme_admin": false, 00:18:28.014 "nvme_io": false, 00:18:28.014 "nvme_io_md": false, 00:18:28.014 "write_zeroes": true, 00:18:28.014 "zcopy": true, 00:18:28.014 "get_zone_info": false, 00:18:28.014 "zone_management": false, 00:18:28.014 "zone_append": false, 00:18:28.014 "compare": false, 00:18:28.014 "compare_and_write": false, 00:18:28.014 "abort": true, 00:18:28.014 "seek_hole": false, 00:18:28.014 "seek_data": false, 00:18:28.014 "copy": true, 00:18:28.014 "nvme_iov_md": false 00:18:28.014 }, 00:18:28.014 "memory_domains": [ 00:18:28.014 { 00:18:28.014 "dma_device_id": "system", 00:18:28.014 "dma_device_type": 1 00:18:28.014 }, 00:18:28.014 { 00:18:28.014 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:28.014 "dma_device_type": 2 00:18:28.014 } 00:18:28.014 ], 00:18:28.014 "driver_specific": {} 00:18:28.014 } 00:18:28.014 ] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i++ )) 00:18:28.014 14:18:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # (( i < num_base_bdevs )) 00:18:28.014 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@290 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n Existed_Raid 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.015 [2024-11-27 14:18:05.221866] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:18:28.015 [2024-11-27 14:18:05.222049] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:18:28.015 [2024-11-27 14:18:05.222222] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:28.015 [2024-11-27 14:18:05.224654] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:28.015 [2024-11-27 14:18:05.224726] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local 
num_base_bdevs_operational=4 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.015 "name": "Existed_Raid", 00:18:28.015 "uuid": "d65973fc-6958-410c-bff7-42de6cd77392", 00:18:28.015 "strip_size_kb": 64, 00:18:28.015 "state": "configuring", 00:18:28.015 "raid_level": "raid5f", 00:18:28.015 "superblock": true, 00:18:28.015 "num_base_bdevs": 4, 00:18:28.015 "num_base_bdevs_discovered": 3, 00:18:28.015 "num_base_bdevs_operational": 4, 00:18:28.015 "base_bdevs_list": [ 00:18:28.015 { 00:18:28.015 "name": "BaseBdev1", 00:18:28.015 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.015 "is_configured": false, 00:18:28.015 "data_offset": 0, 00:18:28.015 "data_size": 0 00:18:28.015 }, 00:18:28.015 { 00:18:28.015 "name": "BaseBdev2", 00:18:28.015 "uuid": "0c43bf93-41d4-44ae-8654-978bde281cc2", 00:18:28.015 "is_configured": true, 00:18:28.015 "data_offset": 2048, 00:18:28.015 
"data_size": 63488 00:18:28.015 }, 00:18:28.015 { 00:18:28.015 "name": "BaseBdev3", 00:18:28.015 "uuid": "8d575118-7bc4-42d1-9873-344544f88cce", 00:18:28.015 "is_configured": true, 00:18:28.015 "data_offset": 2048, 00:18:28.015 "data_size": 63488 00:18:28.015 }, 00:18:28.015 { 00:18:28.015 "name": "BaseBdev4", 00:18:28.015 "uuid": "69dfa80f-942f-4d7e-9757-a861b47df19f", 00:18:28.015 "is_configured": true, 00:18:28.015 "data_offset": 2048, 00:18:28.015 "data_size": 63488 00:18:28.015 } 00:18:28.015 ] 00:18:28.015 }' 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.015 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev2 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.582 [2024-11-27 14:18:05.758123] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:28.582 14:18:05 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.582 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:28.582 "name": "Existed_Raid", 00:18:28.582 "uuid": "d65973fc-6958-410c-bff7-42de6cd77392", 00:18:28.582 "strip_size_kb": 64, 00:18:28.582 "state": "configuring", 00:18:28.582 "raid_level": "raid5f", 00:18:28.582 "superblock": true, 00:18:28.582 "num_base_bdevs": 4, 00:18:28.582 "num_base_bdevs_discovered": 2, 00:18:28.582 "num_base_bdevs_operational": 4, 00:18:28.582 "base_bdevs_list": [ 00:18:28.582 { 00:18:28.582 "name": "BaseBdev1", 00:18:28.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:28.582 "is_configured": false, 00:18:28.582 "data_offset": 0, 00:18:28.582 "data_size": 0 00:18:28.582 }, 00:18:28.582 { 00:18:28.582 "name": null, 00:18:28.582 "uuid": "0c43bf93-41d4-44ae-8654-978bde281cc2", 00:18:28.582 
"is_configured": false, 00:18:28.582 "data_offset": 0, 00:18:28.582 "data_size": 63488 00:18:28.582 }, 00:18:28.582 { 00:18:28.582 "name": "BaseBdev3", 00:18:28.582 "uuid": "8d575118-7bc4-42d1-9873-344544f88cce", 00:18:28.582 "is_configured": true, 00:18:28.582 "data_offset": 2048, 00:18:28.582 "data_size": 63488 00:18:28.582 }, 00:18:28.582 { 00:18:28.583 "name": "BaseBdev4", 00:18:28.583 "uuid": "69dfa80f-942f-4d7e-9757-a861b47df19f", 00:18:28.583 "is_configured": true, 00:18:28.583 "data_offset": 2048, 00:18:28.583 "data_size": 63488 00:18:28.583 } 00:18:28.583 ] 00:18:28.583 }' 00:18:28.583 14:18:05 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:28.583 14:18:05 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@295 -- # [[ false == \f\a\l\s\e ]] 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@297 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.151 [2024-11-27 14:18:06.384978] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:18:29.151 BaseBdev1 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@298 -- # waitforbdev BaseBdev1 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.151 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.151 [ 00:18:29.151 { 00:18:29.151 "name": "BaseBdev1", 00:18:29.151 "aliases": [ 00:18:29.151 "ce39ce46-e1cb-4f00-ad48-5103d8676d17" 00:18:29.151 ], 00:18:29.151 "product_name": "Malloc disk", 00:18:29.151 "block_size": 512, 00:18:29.151 "num_blocks": 65536, 00:18:29.151 "uuid": "ce39ce46-e1cb-4f00-ad48-5103d8676d17", 
00:18:29.151 "assigned_rate_limits": { 00:18:29.151 "rw_ios_per_sec": 0, 00:18:29.151 "rw_mbytes_per_sec": 0, 00:18:29.151 "r_mbytes_per_sec": 0, 00:18:29.151 "w_mbytes_per_sec": 0 00:18:29.151 }, 00:18:29.151 "claimed": true, 00:18:29.151 "claim_type": "exclusive_write", 00:18:29.151 "zoned": false, 00:18:29.151 "supported_io_types": { 00:18:29.151 "read": true, 00:18:29.151 "write": true, 00:18:29.151 "unmap": true, 00:18:29.151 "flush": true, 00:18:29.151 "reset": true, 00:18:29.151 "nvme_admin": false, 00:18:29.151 "nvme_io": false, 00:18:29.151 "nvme_io_md": false, 00:18:29.151 "write_zeroes": true, 00:18:29.151 "zcopy": true, 00:18:29.151 "get_zone_info": false, 00:18:29.151 "zone_management": false, 00:18:29.151 "zone_append": false, 00:18:29.151 "compare": false, 00:18:29.151 "compare_and_write": false, 00:18:29.151 "abort": true, 00:18:29.151 "seek_hole": false, 00:18:29.151 "seek_data": false, 00:18:29.151 "copy": true, 00:18:29.151 "nvme_iov_md": false 00:18:29.151 }, 00:18:29.151 "memory_domains": [ 00:18:29.151 { 00:18:29.151 "dma_device_id": "system", 00:18:29.151 "dma_device_type": 1 00:18:29.151 }, 00:18:29.151 { 00:18:29.151 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:29.151 "dma_device_type": 2 00:18:29.152 } 00:18:29.152 ], 00:18:29.152 "driver_specific": {} 00:18:29.152 } 00:18:29.152 ] 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.152 14:18:06 
bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.152 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.411 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.412 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.412 "name": "Existed_Raid", 00:18:29.412 "uuid": "d65973fc-6958-410c-bff7-42de6cd77392", 00:18:29.412 "strip_size_kb": 64, 00:18:29.412 "state": "configuring", 00:18:29.412 "raid_level": "raid5f", 00:18:29.412 "superblock": true, 00:18:29.412 "num_base_bdevs": 4, 00:18:29.412 "num_base_bdevs_discovered": 3, 00:18:29.412 "num_base_bdevs_operational": 4, 00:18:29.412 "base_bdevs_list": [ 00:18:29.412 { 00:18:29.412 "name": "BaseBdev1", 00:18:29.412 "uuid": "ce39ce46-e1cb-4f00-ad48-5103d8676d17", 
00:18:29.412 "is_configured": true, 00:18:29.412 "data_offset": 2048, 00:18:29.412 "data_size": 63488 00:18:29.412 }, 00:18:29.412 { 00:18:29.412 "name": null, 00:18:29.412 "uuid": "0c43bf93-41d4-44ae-8654-978bde281cc2", 00:18:29.412 "is_configured": false, 00:18:29.412 "data_offset": 0, 00:18:29.412 "data_size": 63488 00:18:29.412 }, 00:18:29.412 { 00:18:29.412 "name": "BaseBdev3", 00:18:29.412 "uuid": "8d575118-7bc4-42d1-9873-344544f88cce", 00:18:29.412 "is_configured": true, 00:18:29.412 "data_offset": 2048, 00:18:29.412 "data_size": 63488 00:18:29.412 }, 00:18:29.412 { 00:18:29.412 "name": "BaseBdev4", 00:18:29.412 "uuid": "69dfa80f-942f-4d7e-9757-a861b47df19f", 00:18:29.412 "is_configured": true, 00:18:29.412 "data_offset": 2048, 00:18:29.412 "data_size": 63488 00:18:29.412 } 00:18:29.412 ] 00:18:29.412 }' 00:18:29.412 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.412 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.979 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.979 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.979 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.979 14:18:06 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:29.979 14:18:06 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@300 -- # [[ true == \t\r\u\e ]] 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev3 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:29.979 [2024-11-27 14:18:07.029295] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:29.979 "name": "Existed_Raid", 00:18:29.979 "uuid": "d65973fc-6958-410c-bff7-42de6cd77392", 00:18:29.979 "strip_size_kb": 64, 00:18:29.979 "state": "configuring", 00:18:29.979 "raid_level": "raid5f", 00:18:29.979 "superblock": true, 00:18:29.979 "num_base_bdevs": 4, 00:18:29.979 "num_base_bdevs_discovered": 2, 00:18:29.979 "num_base_bdevs_operational": 4, 00:18:29.979 "base_bdevs_list": [ 00:18:29.979 { 00:18:29.979 "name": "BaseBdev1", 00:18:29.979 "uuid": "ce39ce46-e1cb-4f00-ad48-5103d8676d17", 00:18:29.979 "is_configured": true, 00:18:29.979 "data_offset": 2048, 00:18:29.979 "data_size": 63488 00:18:29.979 }, 00:18:29.979 { 00:18:29.979 "name": null, 00:18:29.979 "uuid": "0c43bf93-41d4-44ae-8654-978bde281cc2", 00:18:29.979 "is_configured": false, 00:18:29.979 "data_offset": 0, 00:18:29.979 "data_size": 63488 00:18:29.979 }, 00:18:29.979 { 00:18:29.979 "name": null, 00:18:29.979 "uuid": "8d575118-7bc4-42d1-9873-344544f88cce", 00:18:29.979 "is_configured": false, 00:18:29.979 "data_offset": 0, 00:18:29.979 "data_size": 63488 00:18:29.979 }, 00:18:29.979 { 00:18:29.979 "name": "BaseBdev4", 00:18:29.979 "uuid": "69dfa80f-942f-4d7e-9757-a861b47df19f", 00:18:29.979 "is_configured": true, 00:18:29.979 "data_offset": 2048, 00:18:29.979 "data_size": 63488 00:18:29.979 } 00:18:29.979 ] 00:18:29.979 }' 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:29.979 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@304 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@304 -- # [[ false == \f\a\l\s\e ]] 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.547 [2024-11-27 14:18:07.617460] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@307 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 
00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:30.547 "name": "Existed_Raid", 00:18:30.547 "uuid": "d65973fc-6958-410c-bff7-42de6cd77392", 00:18:30.547 "strip_size_kb": 64, 00:18:30.547 "state": "configuring", 00:18:30.547 "raid_level": "raid5f", 00:18:30.547 "superblock": true, 00:18:30.547 "num_base_bdevs": 4, 00:18:30.547 "num_base_bdevs_discovered": 3, 00:18:30.547 "num_base_bdevs_operational": 4, 00:18:30.547 "base_bdevs_list": [ 00:18:30.547 { 00:18:30.547 "name": "BaseBdev1", 00:18:30.547 "uuid": "ce39ce46-e1cb-4f00-ad48-5103d8676d17", 00:18:30.547 "is_configured": true, 00:18:30.547 "data_offset": 2048, 00:18:30.547 "data_size": 63488 00:18:30.547 }, 00:18:30.547 { 00:18:30.547 "name": null, 00:18:30.547 "uuid": "0c43bf93-41d4-44ae-8654-978bde281cc2", 00:18:30.547 "is_configured": false, 00:18:30.547 "data_offset": 0, 00:18:30.547 "data_size": 63488 00:18:30.547 }, 00:18:30.547 { 00:18:30.547 "name": "BaseBdev3", 00:18:30.547 "uuid": "8d575118-7bc4-42d1-9873-344544f88cce", 
00:18:30.547 "is_configured": true, 00:18:30.547 "data_offset": 2048, 00:18:30.547 "data_size": 63488 00:18:30.547 }, 00:18:30.547 { 00:18:30.547 "name": "BaseBdev4", 00:18:30.547 "uuid": "69dfa80f-942f-4d7e-9757-a861b47df19f", 00:18:30.547 "is_configured": true, 00:18:30.547 "data_offset": 2048, 00:18:30.547 "data_size": 63488 00:18:30.547 } 00:18:30.547 ] 00:18:30.547 }' 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:30.547 14:18:07 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.141 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.141 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:18:31.141 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # [[ true == \t\r\u\e ]] 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.142 [2024-11-27 14:18:08.205734] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@311 -- # verify_raid_bdev_state Existed_Raid 
configuring raid5f 64 4 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.142 "name": "Existed_Raid", 00:18:31.142 "uuid": "d65973fc-6958-410c-bff7-42de6cd77392", 00:18:31.142 "strip_size_kb": 64, 00:18:31.142 "state": "configuring", 00:18:31.142 "raid_level": "raid5f", 
00:18:31.142 "superblock": true, 00:18:31.142 "num_base_bdevs": 4, 00:18:31.142 "num_base_bdevs_discovered": 2, 00:18:31.142 "num_base_bdevs_operational": 4, 00:18:31.142 "base_bdevs_list": [ 00:18:31.142 { 00:18:31.142 "name": null, 00:18:31.142 "uuid": "ce39ce46-e1cb-4f00-ad48-5103d8676d17", 00:18:31.142 "is_configured": false, 00:18:31.142 "data_offset": 0, 00:18:31.142 "data_size": 63488 00:18:31.142 }, 00:18:31.142 { 00:18:31.142 "name": null, 00:18:31.142 "uuid": "0c43bf93-41d4-44ae-8654-978bde281cc2", 00:18:31.142 "is_configured": false, 00:18:31.142 "data_offset": 0, 00:18:31.142 "data_size": 63488 00:18:31.142 }, 00:18:31.142 { 00:18:31.142 "name": "BaseBdev3", 00:18:31.142 "uuid": "8d575118-7bc4-42d1-9873-344544f88cce", 00:18:31.142 "is_configured": true, 00:18:31.142 "data_offset": 2048, 00:18:31.142 "data_size": 63488 00:18:31.142 }, 00:18:31.142 { 00:18:31.142 "name": "BaseBdev4", 00:18:31.142 "uuid": "69dfa80f-942f-4d7e-9757-a861b47df19f", 00:18:31.142 "is_configured": true, 00:18:31.142 "data_offset": 2048, 00:18:31.142 "data_size": 63488 00:18:31.142 } 00:18:31.142 ] 00:18:31.142 }' 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:31.142 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@312 -- # [[ false == \f\a\l\s\e ]] 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # rpc_cmd bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.710 [2024-11-27 14:18:08.870378] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # verify_raid_bdev_state Existed_Raid configuring raid5f 64 4 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 
00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:31.710 "name": "Existed_Raid", 00:18:31.710 "uuid": "d65973fc-6958-410c-bff7-42de6cd77392", 00:18:31.710 "strip_size_kb": 64, 00:18:31.710 "state": "configuring", 00:18:31.710 "raid_level": "raid5f", 00:18:31.710 "superblock": true, 00:18:31.710 "num_base_bdevs": 4, 00:18:31.710 "num_base_bdevs_discovered": 3, 00:18:31.710 "num_base_bdevs_operational": 4, 00:18:31.710 "base_bdevs_list": [ 00:18:31.710 { 00:18:31.710 "name": null, 00:18:31.710 "uuid": "ce39ce46-e1cb-4f00-ad48-5103d8676d17", 00:18:31.710 "is_configured": false, 00:18:31.710 "data_offset": 0, 00:18:31.710 "data_size": 63488 00:18:31.710 }, 00:18:31.710 { 00:18:31.710 "name": "BaseBdev2", 00:18:31.710 "uuid": "0c43bf93-41d4-44ae-8654-978bde281cc2", 00:18:31.710 "is_configured": true, 00:18:31.710 "data_offset": 2048, 00:18:31.710 "data_size": 63488 00:18:31.710 }, 00:18:31.710 { 00:18:31.710 "name": "BaseBdev3", 00:18:31.710 "uuid": "8d575118-7bc4-42d1-9873-344544f88cce", 00:18:31.710 "is_configured": true, 00:18:31.710 "data_offset": 2048, 00:18:31.710 "data_size": 63488 00:18:31.710 }, 00:18:31.710 { 00:18:31.710 "name": "BaseBdev4", 00:18:31.710 "uuid": "69dfa80f-942f-4d7e-9757-a861b47df19f", 00:18:31.710 "is_configured": true, 00:18:31.710 "data_offset": 2048, 00:18:31.710 "data_size": 63488 00:18:31.710 } 00:18:31.710 ] 00:18:31.710 }' 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 
-- # xtrace_disable 00:18:31.710 14:18:08 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@316 -- # [[ true == \t\r\u\e ]] 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # rpc_cmd bdev_malloc_create 32 512 -b NewBaseBdev -u ce39ce46-e1cb-4f00-ad48-5103d8676d17 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.279 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.538 [2024-11-27 14:18:09.557067] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:18:32.538 [2024-11-27 14:18:09.557423] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:32.538 [2024-11-27 14:18:09.557442] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:32.538 NewBaseBdev 00:18:32.538 [2024-11-27 14:18:09.557758] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000063c0 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # waitforbdev NewBaseBdev 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@903 -- # local bdev_name=NewBaseBdev 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@905 -- # local i 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.538 [2024-11-27 14:18:09.564330] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:32.538 [2024-11-27 14:18:09.564506] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000008200 00:18:32.538 [2024-11-27 14:18:09.564993] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev -t 2000 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.538 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.538 [ 00:18:32.538 { 00:18:32.538 "name": "NewBaseBdev", 00:18:32.538 "aliases": [ 00:18:32.538 "ce39ce46-e1cb-4f00-ad48-5103d8676d17" 00:18:32.538 ], 00:18:32.538 "product_name": "Malloc disk", 00:18:32.538 "block_size": 512, 00:18:32.539 "num_blocks": 65536, 00:18:32.539 "uuid": "ce39ce46-e1cb-4f00-ad48-5103d8676d17", 00:18:32.539 "assigned_rate_limits": { 00:18:32.539 "rw_ios_per_sec": 0, 00:18:32.539 "rw_mbytes_per_sec": 0, 00:18:32.539 "r_mbytes_per_sec": 0, 00:18:32.539 "w_mbytes_per_sec": 0 00:18:32.539 }, 00:18:32.539 "claimed": true, 00:18:32.539 "claim_type": "exclusive_write", 00:18:32.539 "zoned": false, 00:18:32.539 "supported_io_types": { 00:18:32.539 "read": true, 00:18:32.539 "write": true, 00:18:32.539 "unmap": true, 00:18:32.539 "flush": true, 00:18:32.539 "reset": true, 00:18:32.539 "nvme_admin": false, 00:18:32.539 "nvme_io": false, 00:18:32.539 "nvme_io_md": false, 00:18:32.539 "write_zeroes": true, 00:18:32.539 "zcopy": true, 00:18:32.539 "get_zone_info": false, 00:18:32.539 "zone_management": false, 00:18:32.539 "zone_append": false, 00:18:32.539 "compare": false, 00:18:32.539 "compare_and_write": false, 00:18:32.539 "abort": true, 00:18:32.539 "seek_hole": false, 00:18:32.539 "seek_data": false, 00:18:32.539 "copy": true, 00:18:32.539 "nvme_iov_md": false 00:18:32.539 }, 00:18:32.539 "memory_domains": [ 00:18:32.539 { 00:18:32.539 "dma_device_id": "system", 00:18:32.539 "dma_device_type": 1 00:18:32.539 }, 00:18:32.539 { 00:18:32.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:18:32.539 "dma_device_type": 2 00:18:32.539 } 
00:18:32.539 ], 00:18:32.539 "driver_specific": {} 00:18:32.539 } 00:18:32.539 ] 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@911 -- # return 0 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@320 -- # verify_raid_bdev_state Existed_Raid online raid5f 64 4 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:32.539 
14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:32.539 "name": "Existed_Raid", 00:18:32.539 "uuid": "d65973fc-6958-410c-bff7-42de6cd77392", 00:18:32.539 "strip_size_kb": 64, 00:18:32.539 "state": "online", 00:18:32.539 "raid_level": "raid5f", 00:18:32.539 "superblock": true, 00:18:32.539 "num_base_bdevs": 4, 00:18:32.539 "num_base_bdevs_discovered": 4, 00:18:32.539 "num_base_bdevs_operational": 4, 00:18:32.539 "base_bdevs_list": [ 00:18:32.539 { 00:18:32.539 "name": "NewBaseBdev", 00:18:32.539 "uuid": "ce39ce46-e1cb-4f00-ad48-5103d8676d17", 00:18:32.539 "is_configured": true, 00:18:32.539 "data_offset": 2048, 00:18:32.539 "data_size": 63488 00:18:32.539 }, 00:18:32.539 { 00:18:32.539 "name": "BaseBdev2", 00:18:32.539 "uuid": "0c43bf93-41d4-44ae-8654-978bde281cc2", 00:18:32.539 "is_configured": true, 00:18:32.539 "data_offset": 2048, 00:18:32.539 "data_size": 63488 00:18:32.539 }, 00:18:32.539 { 00:18:32.539 "name": "BaseBdev3", 00:18:32.539 "uuid": "8d575118-7bc4-42d1-9873-344544f88cce", 00:18:32.539 "is_configured": true, 00:18:32.539 "data_offset": 2048, 00:18:32.539 "data_size": 63488 00:18:32.539 }, 00:18:32.539 { 00:18:32.539 "name": "BaseBdev4", 00:18:32.539 "uuid": "69dfa80f-942f-4d7e-9757-a861b47df19f", 00:18:32.539 "is_configured": true, 00:18:32.539 "data_offset": 2048, 00:18:32.539 "data_size": 63488 00:18:32.539 } 00:18:32.539 ] 00:18:32.539 }' 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:32.539 14:18:09 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # verify_raid_bdev_properties Existed_Raid 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@181 -- # local 
raid_bdev_name=Existed_Raid 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@184 -- # local name 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.107 [2024-11-27 14:18:10.165135] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:33.107 "name": "Existed_Raid", 00:18:33.107 "aliases": [ 00:18:33.107 "d65973fc-6958-410c-bff7-42de6cd77392" 00:18:33.107 ], 00:18:33.107 "product_name": "Raid Volume", 00:18:33.107 "block_size": 512, 00:18:33.107 "num_blocks": 190464, 00:18:33.107 "uuid": "d65973fc-6958-410c-bff7-42de6cd77392", 00:18:33.107 "assigned_rate_limits": { 00:18:33.107 "rw_ios_per_sec": 0, 00:18:33.107 "rw_mbytes_per_sec": 0, 00:18:33.107 "r_mbytes_per_sec": 0, 00:18:33.107 "w_mbytes_per_sec": 0 00:18:33.107 }, 00:18:33.107 "claimed": false, 00:18:33.107 "zoned": false, 00:18:33.107 "supported_io_types": { 00:18:33.107 "read": true, 00:18:33.107 "write": true, 00:18:33.107 "unmap": false, 00:18:33.107 "flush": false, 
00:18:33.107 "reset": true, 00:18:33.107 "nvme_admin": false, 00:18:33.107 "nvme_io": false, 00:18:33.107 "nvme_io_md": false, 00:18:33.107 "write_zeroes": true, 00:18:33.107 "zcopy": false, 00:18:33.107 "get_zone_info": false, 00:18:33.107 "zone_management": false, 00:18:33.107 "zone_append": false, 00:18:33.107 "compare": false, 00:18:33.107 "compare_and_write": false, 00:18:33.107 "abort": false, 00:18:33.107 "seek_hole": false, 00:18:33.107 "seek_data": false, 00:18:33.107 "copy": false, 00:18:33.107 "nvme_iov_md": false 00:18:33.107 }, 00:18:33.107 "driver_specific": { 00:18:33.107 "raid": { 00:18:33.107 "uuid": "d65973fc-6958-410c-bff7-42de6cd77392", 00:18:33.107 "strip_size_kb": 64, 00:18:33.107 "state": "online", 00:18:33.107 "raid_level": "raid5f", 00:18:33.107 "superblock": true, 00:18:33.107 "num_base_bdevs": 4, 00:18:33.107 "num_base_bdevs_discovered": 4, 00:18:33.107 "num_base_bdevs_operational": 4, 00:18:33.107 "base_bdevs_list": [ 00:18:33.107 { 00:18:33.107 "name": "NewBaseBdev", 00:18:33.107 "uuid": "ce39ce46-e1cb-4f00-ad48-5103d8676d17", 00:18:33.107 "is_configured": true, 00:18:33.107 "data_offset": 2048, 00:18:33.107 "data_size": 63488 00:18:33.107 }, 00:18:33.107 { 00:18:33.107 "name": "BaseBdev2", 00:18:33.107 "uuid": "0c43bf93-41d4-44ae-8654-978bde281cc2", 00:18:33.107 "is_configured": true, 00:18:33.107 "data_offset": 2048, 00:18:33.107 "data_size": 63488 00:18:33.107 }, 00:18:33.107 { 00:18:33.107 "name": "BaseBdev3", 00:18:33.107 "uuid": "8d575118-7bc4-42d1-9873-344544f88cce", 00:18:33.107 "is_configured": true, 00:18:33.107 "data_offset": 2048, 00:18:33.107 "data_size": 63488 00:18:33.107 }, 00:18:33.107 { 00:18:33.107 "name": "BaseBdev4", 00:18:33.107 "uuid": "69dfa80f-942f-4d7e-9757-a861b47df19f", 00:18:33.107 "is_configured": true, 00:18:33.107 "data_offset": 2048, 00:18:33.107 "data_size": 63488 00:18:33.107 } 00:18:33.107 ] 00:18:33.107 } 00:18:33.107 } 00:18:33.107 }' 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- 
bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@188 -- # base_bdev_names='NewBaseBdev 00:18:33.107 BaseBdev2 00:18:33.107 BaseBdev3 00:18:33.107 BaseBdev4' 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b NewBaseBdev 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.107 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:18:33.367 
14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev3 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev4 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:33.367 [2024-11-27 14:18:10.576980] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:18:33.367 [2024-11-27 14:18:10.577017] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:33.367 [2024-11-27 14:18:10.577120] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:33.367 [2024-11-27 14:18:10.577507] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:33.367 [2024-11-27 14:18:10.577525] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name Existed_Raid, state offline 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # killprocess 83738 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@954 -- # '[' -z 83738 ']' 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@958 -- # kill -0 83738 
00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # uname 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 83738 00:18:33.367 killing process with pid 83738 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:33.367 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:33.368 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 83738' 00:18:33.368 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@973 -- # kill 83738 00:18:33.368 [2024-11-27 14:18:10.615697] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:33.368 14:18:10 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@978 -- # wait 83738 00:18:33.935 [2024-11-27 14:18:10.962476] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:34.872 ************************************ 00:18:34.872 END TEST raid5f_state_function_test_sb 00:18:34.872 ************************************ 00:18:34.872 14:18:11 bdev_raid.raid5f_state_function_test_sb -- bdev/bdev_raid.sh@328 -- # return 0 00:18:34.872 00:18:34.872 real 0m13.302s 00:18:34.872 user 0m22.182s 00:18:34.872 sys 0m1.830s 00:18:34.872 14:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:34.872 14:18:11 bdev_raid.raid5f_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:18:34.872 14:18:12 bdev_raid -- bdev/bdev_raid.sh@988 -- # run_test raid5f_superblock_test raid_superblock_test raid5f 4 00:18:34.872 14:18:12 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:18:34.872 14:18:12 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:34.872 14:18:12 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:34.872 ************************************ 00:18:34.872 START TEST raid5f_superblock_test 00:18:34.872 ************************************ 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1129 -- # raid_superblock_test raid5f 4 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@393 -- # local raid_level=raid5f 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=4 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@404 -- # '[' raid5f '!=' raid1 ']' 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test 
-- bdev/bdev_raid.sh@405 -- # strip_size=64 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@406 -- # strip_size_create_arg='-z 64' 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@412 -- # raid_pid=84424 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@413 -- # waitforlisten 84424 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@835 -- # '[' -z 84424 ']' 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.872 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.872 14:18:12 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.131 [2024-11-27 14:18:12.159101] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:18:35.131 [2024-11-27 14:18:12.159302] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84424 ] 00:18:35.131 [2024-11-27 14:18:12.339609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.389 [2024-11-27 14:18:12.472355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.647 [2024-11-27 14:18:12.674531] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.647 [2024-11-27 14:18:12.674600] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@868 -- # return 0 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b 
malloc1 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.907 malloc1 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.907 [2024-11-27 14:18:13.108749] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:35.907 [2024-11-27 14:18:13.108840] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.907 [2024-11-27 14:18:13.108873] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:35.907 [2024-11-27 14:18:13.108904] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.907 [2024-11-27 14:18:13.111712] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.907 [2024-11-27 14:18:13.111954] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:35.907 pt1 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 
00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc2 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.907 malloc2 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:35.907 [2024-11-27 14:18:13.157288] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:35.907 [2024-11-27 14:18:13.157364] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:35.907 [2024-11-27 14:18:13.157399] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:35.907 [2024-11-27 14:18:13.157411] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:35.907 [2024-11-27 14:18:13.160296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:35.907 [2024-11-27 14:18:13.160336] vbdev_passthru.c: 
710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:35.907 pt2 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc3 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt3 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc3 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.907 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.167 malloc3 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.167 [2024-11-27 14:18:13.230357] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:36.167 [2024-11-27 14:18:13.230462] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.167 [2024-11-27 14:18:13.230503] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:36.167 [2024-11-27 14:18:13.230522] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.167 [2024-11-27 14:18:13.234703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.167 [2024-11-27 14:18:13.234814] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:36.167 pt3 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc4 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt4 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 512 -b malloc4 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.167 14:18:13 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.167 malloc4 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.167 [2024-11-27 14:18:13.301759] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:36.167 [2024-11-27 14:18:13.302080] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:36.167 [2024-11-27 14:18:13.302175] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:36.167 [2024-11-27 14:18:13.302424] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:36.167 [2024-11-27 14:18:13.305965] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:36.167 [2024-11-27 14:18:13.306159] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:36.167 pt4 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:18:36.167 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''pt1 pt2 pt3 pt4'\''' -n raid_bdev1 -s 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@10 -- # set +x 00:18:36.168 [2024-11-27 14:18:13.314561] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:36.168 [2024-11-27 14:18:13.317553] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:36.168 [2024-11-27 14:18:13.317698] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:36.168 [2024-11-27 14:18:13.317816] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:36.168 [2024-11-27 14:18:13.318218] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:36.168 [2024-11-27 14:18:13.318248] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:36.168 [2024-11-27 14:18:13.318682] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:18:36.168 [2024-11-27 14:18:13.329890] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:36.168 [2024-11-27 14:18:13.329925] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:36.168 [2024-11-27 14:18:13.330261] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:36.168 
14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:36.168 "name": "raid_bdev1", 00:18:36.168 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:36.168 "strip_size_kb": 64, 00:18:36.168 "state": "online", 00:18:36.168 "raid_level": "raid5f", 00:18:36.168 "superblock": true, 00:18:36.168 "num_base_bdevs": 4, 00:18:36.168 "num_base_bdevs_discovered": 4, 00:18:36.168 "num_base_bdevs_operational": 4, 00:18:36.168 "base_bdevs_list": [ 00:18:36.168 { 00:18:36.168 "name": "pt1", 00:18:36.168 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:36.168 "is_configured": true, 00:18:36.168 "data_offset": 2048, 00:18:36.168 "data_size": 63488 00:18:36.168 }, 00:18:36.168 { 00:18:36.168 "name": "pt2", 00:18:36.168 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.168 "is_configured": true, 00:18:36.168 "data_offset": 2048, 00:18:36.168 
"data_size": 63488 00:18:36.168 }, 00:18:36.168 { 00:18:36.168 "name": "pt3", 00:18:36.168 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:36.168 "is_configured": true, 00:18:36.168 "data_offset": 2048, 00:18:36.168 "data_size": 63488 00:18:36.168 }, 00:18:36.168 { 00:18:36.168 "name": "pt4", 00:18:36.168 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:36.168 "is_configured": true, 00:18:36.168 "data_offset": 2048, 00:18:36.168 "data_size": 63488 00:18:36.168 } 00:18:36.168 ] 00:18:36.168 }' 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:36.168 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.738 [2024-11-27 14:18:13.840385] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:36.738 "name": "raid_bdev1", 00:18:36.738 "aliases": [ 00:18:36.738 "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329" 00:18:36.738 ], 00:18:36.738 "product_name": "Raid Volume", 00:18:36.738 "block_size": 512, 00:18:36.738 "num_blocks": 190464, 00:18:36.738 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:36.738 "assigned_rate_limits": { 00:18:36.738 "rw_ios_per_sec": 0, 00:18:36.738 "rw_mbytes_per_sec": 0, 00:18:36.738 "r_mbytes_per_sec": 0, 00:18:36.738 "w_mbytes_per_sec": 0 00:18:36.738 }, 00:18:36.738 "claimed": false, 00:18:36.738 "zoned": false, 00:18:36.738 "supported_io_types": { 00:18:36.738 "read": true, 00:18:36.738 "write": true, 00:18:36.738 "unmap": false, 00:18:36.738 "flush": false, 00:18:36.738 "reset": true, 00:18:36.738 "nvme_admin": false, 00:18:36.738 "nvme_io": false, 00:18:36.738 "nvme_io_md": false, 00:18:36.738 "write_zeroes": true, 00:18:36.738 "zcopy": false, 00:18:36.738 "get_zone_info": false, 00:18:36.738 "zone_management": false, 00:18:36.738 "zone_append": false, 00:18:36.738 "compare": false, 00:18:36.738 "compare_and_write": false, 00:18:36.738 "abort": false, 00:18:36.738 "seek_hole": false, 00:18:36.738 "seek_data": false, 00:18:36.738 "copy": false, 00:18:36.738 "nvme_iov_md": false 00:18:36.738 }, 00:18:36.738 "driver_specific": { 00:18:36.738 "raid": { 00:18:36.738 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:36.738 "strip_size_kb": 64, 00:18:36.738 "state": "online", 00:18:36.738 "raid_level": "raid5f", 00:18:36.738 "superblock": true, 00:18:36.738 "num_base_bdevs": 4, 00:18:36.738 "num_base_bdevs_discovered": 4, 00:18:36.738 "num_base_bdevs_operational": 4, 00:18:36.738 "base_bdevs_list": [ 00:18:36.738 { 00:18:36.738 "name": "pt1", 00:18:36.738 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:36.738 "is_configured": true, 00:18:36.738 "data_offset": 2048, 
00:18:36.738 "data_size": 63488 00:18:36.738 }, 00:18:36.738 { 00:18:36.738 "name": "pt2", 00:18:36.738 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:36.738 "is_configured": true, 00:18:36.738 "data_offset": 2048, 00:18:36.738 "data_size": 63488 00:18:36.738 }, 00:18:36.738 { 00:18:36.738 "name": "pt3", 00:18:36.738 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:36.738 "is_configured": true, 00:18:36.738 "data_offset": 2048, 00:18:36.738 "data_size": 63488 00:18:36.738 }, 00:18:36.738 { 00:18:36.738 "name": "pt4", 00:18:36.738 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:36.738 "is_configured": true, 00:18:36.738 "data_offset": 2048, 00:18:36.738 "data_size": 63488 00:18:36.738 } 00:18:36.738 ] 00:18:36.738 } 00:18:36.738 } 00:18:36.738 }' 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:36.738 pt2 00:18:36.738 pt3 00:18:36.738 pt4' 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.738 14:18:13 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.998 14:18:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:36.998 [2024-11-27 14:18:14.224437] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=e3ad67b4-9c5e-4a9c-9169-d7658a7a5329 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@436 -- # '[' -z 
e3ad67b4-9c5e-4a9c-9169-d7658a7a5329 ']' 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.998 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.258 [2024-11-27 14:18:14.276249] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.258 [2024-11-27 14:18:14.276282] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:37.258 [2024-11-27 14:18:14.276385] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:37.258 [2024-11-27 14:18:14.276509] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:37.258 [2024-11-27 14:18:14.276532] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:37.258 
14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.258 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt3 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt4 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 14:18:14 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@652 -- # local es=0 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''malloc1 malloc2 malloc3 malloc4'\''' -n raid_bdev1 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 
-- # xtrace_disable 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 [2024-11-27 14:18:14.436312] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:18:37.259 [2024-11-27 14:18:14.438962] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:18:37.259 [2024-11-27 14:18:14.439165] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:18:37.259 [2024-11-27 14:18:14.439251] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:18:37.259 [2024-11-27 14:18:14.439325] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:18:37.259 [2024-11-27 14:18:14.439432] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:18:37.259 [2024-11-27 14:18:14.439467] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:18:37.259 [2024-11-27 14:18:14.439497] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:18:37.259 [2024-11-27 14:18:14.439519] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:37.259 [2024-11-27 14:18:14.439535] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:18:37.259 request: 00:18:37.259 { 00:18:37.259 "name": "raid_bdev1", 00:18:37.259 "raid_level": "raid5f", 00:18:37.259 "base_bdevs": [ 00:18:37.259 "malloc1", 00:18:37.259 "malloc2", 00:18:37.259 "malloc3", 00:18:37.259 "malloc4" 00:18:37.259 ], 00:18:37.259 "strip_size_kb": 64, 00:18:37.259 "superblock": false, 00:18:37.259 "method": "bdev_raid_create", 00:18:37.259 "req_id": 1 00:18:37.259 } 00:18:37.259 Got JSON-RPC error response 
00:18:37.259 response: 00:18:37.259 { 00:18:37.259 "code": -17, 00:18:37.259 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:18:37.259 } 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@655 -- # es=1 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 [2024-11-27 14:18:14.500307] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:37.259 [2024-11-27 14:18:14.500546] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: 
base bdev opened 00:18:37.259 [2024-11-27 14:18:14.500617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:18:37.259 [2024-11-27 14:18:14.500854] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.259 [2024-11-27 14:18:14.503825] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.259 [2024-11-27 14:18:14.504008] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:37.259 [2024-11-27 14:18:14.504221] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:37.259 [2024-11-27 14:18:14.504416] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:37.259 pt1 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 
-- # local tmp 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.259 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.519 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:37.519 "name": "raid_bdev1", 00:18:37.519 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:37.519 "strip_size_kb": 64, 00:18:37.519 "state": "configuring", 00:18:37.519 "raid_level": "raid5f", 00:18:37.519 "superblock": true, 00:18:37.519 "num_base_bdevs": 4, 00:18:37.519 "num_base_bdevs_discovered": 1, 00:18:37.519 "num_base_bdevs_operational": 4, 00:18:37.519 "base_bdevs_list": [ 00:18:37.519 { 00:18:37.519 "name": "pt1", 00:18:37.519 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:37.519 "is_configured": true, 00:18:37.519 "data_offset": 2048, 00:18:37.519 "data_size": 63488 00:18:37.519 }, 00:18:37.519 { 00:18:37.519 "name": null, 00:18:37.519 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:37.519 "is_configured": false, 00:18:37.519 "data_offset": 2048, 00:18:37.519 "data_size": 63488 00:18:37.519 }, 00:18:37.519 { 00:18:37.519 "name": null, 00:18:37.519 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:37.519 "is_configured": false, 00:18:37.519 "data_offset": 2048, 00:18:37.519 "data_size": 63488 00:18:37.519 }, 00:18:37.519 { 00:18:37.519 "name": null, 00:18:37.519 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:37.519 "is_configured": false, 00:18:37.519 "data_offset": 2048, 00:18:37.519 "data_size": 63488 00:18:37.519 } 00:18:37.519 ] 00:18:37.519 }' 
00:18:37.519 14:18:14 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:37.519 14:18:14 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@470 -- # '[' 4 -gt 2 ']' 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@472 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.779 [2024-11-27 14:18:15.032980] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:37.779 [2024-11-27 14:18:15.033076] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:37.779 [2024-11-27 14:18:15.033120] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:37.779 [2024-11-27 14:18:15.033137] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:37.779 [2024-11-27 14:18:15.033713] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:37.779 [2024-11-27 14:18:15.033751] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:37.779 [2024-11-27 14:18:15.033882] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:37.779 [2024-11-27 14:18:15.033920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:37.779 pt2 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@473 -- # rpc_cmd bdev_passthru_delete pt2 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:37.779 [2024-11-27 14:18:15.040958] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@474 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 4 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:37.779 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.038 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:18:38.038 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.038 "name": "raid_bdev1", 00:18:38.038 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:38.038 "strip_size_kb": 64, 00:18:38.038 "state": "configuring", 00:18:38.038 "raid_level": "raid5f", 00:18:38.038 "superblock": true, 00:18:38.038 "num_base_bdevs": 4, 00:18:38.038 "num_base_bdevs_discovered": 1, 00:18:38.038 "num_base_bdevs_operational": 4, 00:18:38.038 "base_bdevs_list": [ 00:18:38.038 { 00:18:38.038 "name": "pt1", 00:18:38.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.038 "is_configured": true, 00:18:38.038 "data_offset": 2048, 00:18:38.038 "data_size": 63488 00:18:38.038 }, 00:18:38.038 { 00:18:38.038 "name": null, 00:18:38.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.038 "is_configured": false, 00:18:38.038 "data_offset": 0, 00:18:38.038 "data_size": 63488 00:18:38.038 }, 00:18:38.038 { 00:18:38.038 "name": null, 00:18:38.038 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:38.038 "is_configured": false, 00:18:38.038 "data_offset": 2048, 00:18:38.038 "data_size": 63488 00:18:38.038 }, 00:18:38.038 { 00:18:38.038 "name": null, 00:18:38.038 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:38.038 "is_configured": false, 00:18:38.038 "data_offset": 2048, 00:18:38.038 "data_size": 63488 00:18:38.038 } 00:18:38.038 ] 00:18:38.038 }' 00:18:38.038 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.038 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.603 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:18:38.603 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:38.603 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 
00:18:38.603 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.603 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.603 [2024-11-27 14:18:15.585148] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:38.603 [2024-11-27 14:18:15.585277] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.603 [2024-11-27 14:18:15.585310] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:18:38.603 [2024-11-27 14:18:15.585324] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.603 [2024-11-27 14:18:15.585958] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.603 [2024-11-27 14:18:15.585985] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:38.603 [2024-11-27 14:18:15.586091] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:38.603 [2024-11-27 14:18:15.586123] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:38.603 pt2 00:18:38.603 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.603 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:38.603 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:38.603 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:38.603 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.603 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.603 [2024-11-27 14:18:15.597177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 
00:18:38.603 [2024-11-27 14:18:15.597275] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.603 [2024-11-27 14:18:15.597314] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:18:38.603 [2024-11-27 14:18:15.597331] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.604 [2024-11-27 14:18:15.597939] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.604 [2024-11-27 14:18:15.597971] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:38.604 [2024-11-27 14:18:15.598073] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:38.604 [2024-11-27 14:18:15.598116] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:38.604 pt3 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.604 [2024-11-27 14:18:15.609081] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:38.604 [2024-11-27 14:18:15.609318] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:38.604 [2024-11-27 14:18:15.609360] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:18:38.604 [2024-11-27 14:18:15.609376] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:38.604 [2024-11-27 14:18:15.609918] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:38.604 [2024-11-27 14:18:15.609953] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:38.604 [2024-11-27 14:18:15.610044] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:38.604 [2024-11-27 14:18:15.610077] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:38.604 [2024-11-27 14:18:15.610254] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:18:38.604 [2024-11-27 14:18:15.610276] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:38.604 [2024-11-27 14:18:15.610579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:38.604 [2024-11-27 14:18:15.617336] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:18:38.604 [2024-11-27 14:18:15.617510] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:18:38.604 [2024-11-27 14:18:15.617866] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:38.604 pt4 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:38.604 "name": "raid_bdev1", 00:18:38.604 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:38.604 "strip_size_kb": 64, 00:18:38.604 "state": "online", 00:18:38.604 "raid_level": "raid5f", 00:18:38.604 "superblock": true, 00:18:38.604 "num_base_bdevs": 4, 00:18:38.604 "num_base_bdevs_discovered": 4, 00:18:38.604 "num_base_bdevs_operational": 4, 00:18:38.604 "base_bdevs_list": [ 00:18:38.604 { 00:18:38.604 "name": "pt1", 00:18:38.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:38.604 "is_configured": true, 00:18:38.604 
"data_offset": 2048, 00:18:38.604 "data_size": 63488 00:18:38.604 }, 00:18:38.604 { 00:18:38.604 "name": "pt2", 00:18:38.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:38.604 "is_configured": true, 00:18:38.604 "data_offset": 2048, 00:18:38.604 "data_size": 63488 00:18:38.604 }, 00:18:38.604 { 00:18:38.604 "name": "pt3", 00:18:38.604 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:38.604 "is_configured": true, 00:18:38.604 "data_offset": 2048, 00:18:38.604 "data_size": 63488 00:18:38.604 }, 00:18:38.604 { 00:18:38.604 "name": "pt4", 00:18:38.604 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:38.604 "is_configured": true, 00:18:38.604 "data_offset": 2048, 00:18:38.604 "data_size": 63488 00:18:38.604 } 00:18:38.604 ] 00:18:38.604 }' 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:38.604 14:18:15 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.172 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:18:39.172 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:18:39.172 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:18:39.172 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:18:39.172 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@184 -- # local name 00:18:39.172 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:18:39.172 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.172 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:18:39.172 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.172 14:18:16 
bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.172 [2024-11-27 14:18:16.161867] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.172 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.172 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:39.172 "name": "raid_bdev1", 00:18:39.172 "aliases": [ 00:18:39.172 "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329" 00:18:39.172 ], 00:18:39.172 "product_name": "Raid Volume", 00:18:39.172 "block_size": 512, 00:18:39.172 "num_blocks": 190464, 00:18:39.172 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:39.172 "assigned_rate_limits": { 00:18:39.172 "rw_ios_per_sec": 0, 00:18:39.172 "rw_mbytes_per_sec": 0, 00:18:39.172 "r_mbytes_per_sec": 0, 00:18:39.172 "w_mbytes_per_sec": 0 00:18:39.172 }, 00:18:39.172 "claimed": false, 00:18:39.172 "zoned": false, 00:18:39.172 "supported_io_types": { 00:18:39.172 "read": true, 00:18:39.172 "write": true, 00:18:39.172 "unmap": false, 00:18:39.172 "flush": false, 00:18:39.172 "reset": true, 00:18:39.172 "nvme_admin": false, 00:18:39.172 "nvme_io": false, 00:18:39.172 "nvme_io_md": false, 00:18:39.172 "write_zeroes": true, 00:18:39.172 "zcopy": false, 00:18:39.172 "get_zone_info": false, 00:18:39.172 "zone_management": false, 00:18:39.172 "zone_append": false, 00:18:39.172 "compare": false, 00:18:39.172 "compare_and_write": false, 00:18:39.172 "abort": false, 00:18:39.172 "seek_hole": false, 00:18:39.172 "seek_data": false, 00:18:39.172 "copy": false, 00:18:39.172 "nvme_iov_md": false 00:18:39.172 }, 00:18:39.172 "driver_specific": { 00:18:39.172 "raid": { 00:18:39.172 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:39.172 "strip_size_kb": 64, 00:18:39.172 "state": "online", 00:18:39.172 "raid_level": "raid5f", 00:18:39.172 "superblock": true, 00:18:39.173 "num_base_bdevs": 4, 00:18:39.173 "num_base_bdevs_discovered": 4, 
00:18:39.173 "num_base_bdevs_operational": 4, 00:18:39.173 "base_bdevs_list": [ 00:18:39.173 { 00:18:39.173 "name": "pt1", 00:18:39.173 "uuid": "00000000-0000-0000-0000-000000000001", 00:18:39.173 "is_configured": true, 00:18:39.173 "data_offset": 2048, 00:18:39.173 "data_size": 63488 00:18:39.173 }, 00:18:39.173 { 00:18:39.173 "name": "pt2", 00:18:39.173 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.173 "is_configured": true, 00:18:39.173 "data_offset": 2048, 00:18:39.173 "data_size": 63488 00:18:39.173 }, 00:18:39.173 { 00:18:39.173 "name": "pt3", 00:18:39.173 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:39.173 "is_configured": true, 00:18:39.173 "data_offset": 2048, 00:18:39.173 "data_size": 63488 00:18:39.173 }, 00:18:39.173 { 00:18:39.173 "name": "pt4", 00:18:39.173 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:39.173 "is_configured": true, 00:18:39.173 "data_offset": 2048, 00:18:39.173 "data_size": 63488 00:18:39.173 } 00:18:39.173 ] 00:18:39.173 } 00:18:39.173 } 00:18:39.173 }' 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:18:39.173 pt2 00:18:39.173 pt3 00:18:39.173 pt4' 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='512 ' 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.173 14:18:16 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt3 00:18:39.173 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.173 
14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.432 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.432 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.432 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.432 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:18:39.432 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt4 00:18:39.432 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.432 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.432 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:18:39.432 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='512 ' 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@193 -- # [[ 512 == \5\1\2\ \ \ ]] 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:18:39.433 [2024-11-27 14:18:16.558080] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@487 -- # '[' e3ad67b4-9c5e-4a9c-9169-d7658a7a5329 '!=' e3ad67b4-9c5e-4a9c-9169-d7658a7a5329 ']' 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@491 -- # has_redundancy raid5f 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@198 -- # case $1 in 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@199 -- # return 0 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.433 [2024-11-27 14:18:16.613912] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # 
local num_base_bdevs_discovered 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:39.433 "name": "raid_bdev1", 00:18:39.433 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:39.433 "strip_size_kb": 64, 00:18:39.433 "state": "online", 00:18:39.433 "raid_level": "raid5f", 00:18:39.433 "superblock": true, 00:18:39.433 "num_base_bdevs": 4, 00:18:39.433 "num_base_bdevs_discovered": 3, 00:18:39.433 "num_base_bdevs_operational": 3, 00:18:39.433 "base_bdevs_list": [ 00:18:39.433 { 00:18:39.433 "name": null, 00:18:39.433 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:39.433 "is_configured": false, 00:18:39.433 "data_offset": 0, 00:18:39.433 "data_size": 63488 00:18:39.433 }, 00:18:39.433 { 00:18:39.433 "name": "pt2", 00:18:39.433 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:39.433 "is_configured": true, 00:18:39.433 "data_offset": 2048, 00:18:39.433 "data_size": 63488 00:18:39.433 }, 00:18:39.433 { 00:18:39.433 "name": "pt3", 00:18:39.433 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:39.433 "is_configured": true, 00:18:39.433 "data_offset": 2048, 00:18:39.433 "data_size": 63488 00:18:39.433 }, 00:18:39.433 { 00:18:39.433 "name": "pt4", 00:18:39.433 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:39.433 "is_configured": true, 00:18:39.433 
"data_offset": 2048, 00:18:39.433 "data_size": 63488 00:18:39.433 } 00:18:39.433 ] 00:18:39.433 }' 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:39.433 14:18:16 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.000 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:40.000 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.000 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.000 [2024-11-27 14:18:17.126003] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:40.000 [2024-11-27 14:18:17.126042] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:40.000 [2024-11-27 14:18:17.126147] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:40.000 [2024-11-27 14:18:17.126279] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:40.000 [2024-11-27 14:18:17.126295] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:18:40.000 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.000 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.000 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.000 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.000 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@500 -- # raid_bdev= 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt3 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt4 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.001 [2024-11-27 14:18:17.226065] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:18:40.001 [2024-11-27 14:18:17.226129] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.001 [2024-11-27 14:18:17.226157] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b480 00:18:40.001 [2024-11-27 14:18:17.226171] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.001 [2024-11-27 14:18:17.229215] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.001 [2024-11-27 14:18:17.229259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:18:40.001 [2024-11-27 14:18:17.229406] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:18:40.001 [2024-11-27 14:18:17.229464] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:40.001 pt2 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.001 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.260 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.260 "name": "raid_bdev1", 00:18:40.260 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:40.260 "strip_size_kb": 64, 00:18:40.260 "state": "configuring", 00:18:40.260 "raid_level": "raid5f", 00:18:40.260 "superblock": true, 00:18:40.260 
"num_base_bdevs": 4, 00:18:40.260 "num_base_bdevs_discovered": 1, 00:18:40.260 "num_base_bdevs_operational": 3, 00:18:40.260 "base_bdevs_list": [ 00:18:40.260 { 00:18:40.260 "name": null, 00:18:40.260 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.260 "is_configured": false, 00:18:40.260 "data_offset": 2048, 00:18:40.260 "data_size": 63488 00:18:40.260 }, 00:18:40.260 { 00:18:40.260 "name": "pt2", 00:18:40.260 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.260 "is_configured": true, 00:18:40.260 "data_offset": 2048, 00:18:40.260 "data_size": 63488 00:18:40.260 }, 00:18:40.260 { 00:18:40.260 "name": null, 00:18:40.260 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.260 "is_configured": false, 00:18:40.260 "data_offset": 2048, 00:18:40.260 "data_size": 63488 00:18:40.260 }, 00:18:40.260 { 00:18:40.260 "name": null, 00:18:40.260 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:40.260 "is_configured": false, 00:18:40.260 "data_offset": 2048, 00:18:40.260 "data_size": 63488 00:18:40.260 } 00:18:40.260 ] 00:18:40.260 }' 00:18:40.260 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.260 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.519 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@512 -- # rpc_cmd bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.520 [2024-11-27 14:18:17.766291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:18:40.520 [2024-11-27 
14:18:17.766560] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:40.520 [2024-11-27 14:18:17.766639] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:18:40.520 [2024-11-27 14:18:17.766850] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:40.520 [2024-11-27 14:18:17.767497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:40.520 [2024-11-27 14:18:17.767666] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:18:40.520 [2024-11-27 14:18:17.767823] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:18:40.520 [2024-11-27 14:18:17.767858] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:40.520 pt3 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@515 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:40.520 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.779 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:40.779 "name": "raid_bdev1", 00:18:40.779 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:40.779 "strip_size_kb": 64, 00:18:40.779 "state": "configuring", 00:18:40.779 "raid_level": "raid5f", 00:18:40.779 "superblock": true, 00:18:40.779 "num_base_bdevs": 4, 00:18:40.779 "num_base_bdevs_discovered": 2, 00:18:40.779 "num_base_bdevs_operational": 3, 00:18:40.779 "base_bdevs_list": [ 00:18:40.779 { 00:18:40.779 "name": null, 00:18:40.779 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:40.779 "is_configured": false, 00:18:40.779 "data_offset": 2048, 00:18:40.779 "data_size": 63488 00:18:40.779 }, 00:18:40.779 { 00:18:40.779 "name": "pt2", 00:18:40.779 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:40.779 "is_configured": true, 00:18:40.779 "data_offset": 2048, 00:18:40.779 "data_size": 63488 00:18:40.779 }, 00:18:40.779 { 00:18:40.779 "name": "pt3", 00:18:40.779 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:40.779 "is_configured": true, 00:18:40.779 "data_offset": 2048, 00:18:40.779 "data_size": 63488 00:18:40.779 }, 00:18:40.779 { 00:18:40.779 "name": null, 00:18:40.779 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:40.779 "is_configured": false, 00:18:40.779 "data_offset": 2048, 
00:18:40.779 "data_size": 63488 00:18:40.779 } 00:18:40.779 ] 00:18:40.779 }' 00:18:40.779 14:18:17 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:40.779 14:18:17 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.038 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i++ )) 00:18:41.038 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:18:41.038 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@519 -- # i=3 00:18:41.038 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:41.038 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.038 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.038 [2024-11-27 14:18:18.310515] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:41.038 [2024-11-27 14:18:18.310623] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.038 [2024-11-27 14:18:18.310672] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000bd80 00:18:41.038 [2024-11-27 14:18:18.310713] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.038 [2024-11-27 14:18:18.311400] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.038 [2024-11-27 14:18:18.311426] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:18:41.038 [2024-11-27 14:18:18.311540] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:41.038 [2024-11-27 14:18:18.311576] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:41.038 [2024-11-27 14:18:18.311772] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:18:41.038 [2024-11-27 14:18:18.311826] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:41.038 [2024-11-27 14:18:18.312167] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:18:41.296 [2024-11-27 14:18:18.319201] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:18:41.296 [2024-11-27 14:18:18.319402] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:18:41.296 [2024-11-27 14:18:18.319783] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:41.296 pt4 00:18:41.296 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.296 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:41.296 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.296 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:41.296 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.297 
14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.297 "name": "raid_bdev1", 00:18:41.297 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:41.297 "strip_size_kb": 64, 00:18:41.297 "state": "online", 00:18:41.297 "raid_level": "raid5f", 00:18:41.297 "superblock": true, 00:18:41.297 "num_base_bdevs": 4, 00:18:41.297 "num_base_bdevs_discovered": 3, 00:18:41.297 "num_base_bdevs_operational": 3, 00:18:41.297 "base_bdevs_list": [ 00:18:41.297 { 00:18:41.297 "name": null, 00:18:41.297 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.297 "is_configured": false, 00:18:41.297 "data_offset": 2048, 00:18:41.297 "data_size": 63488 00:18:41.297 }, 00:18:41.297 { 00:18:41.297 "name": "pt2", 00:18:41.297 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.297 "is_configured": true, 00:18:41.297 "data_offset": 2048, 00:18:41.297 "data_size": 63488 00:18:41.297 }, 00:18:41.297 { 00:18:41.297 "name": "pt3", 00:18:41.297 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:41.297 "is_configured": true, 00:18:41.297 "data_offset": 2048, 00:18:41.297 "data_size": 63488 00:18:41.297 }, 00:18:41.297 { 00:18:41.297 "name": "pt4", 00:18:41.297 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:41.297 "is_configured": true, 00:18:41.297 "data_offset": 2048, 00:18:41.297 "data_size": 63488 00:18:41.297 } 00:18:41.297 ] 00:18:41.297 }' 00:18:41.297 14:18:18 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.297 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 [2024-11-27 14:18:18.855492] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.864 [2024-11-27 14:18:18.855693] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:41.864 [2024-11-27 14:18:18.855856] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:41.864 [2024-11-27 14:18:18.855958] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:41.864 [2024-11-27 14:18:18.855979] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@528 -- # '[' -n 
'' ']' 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@532 -- # '[' 4 -gt 2 ']' 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@534 -- # i=3 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@535 -- # rpc_cmd bdev_passthru_delete pt4 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.864 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.864 [2024-11-27 14:18:18.935506] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:18:41.864 [2024-11-27 14:18:18.935766] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:41.864 [2024-11-27 14:18:18.935824] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c080 00:18:41.864 [2024-11-27 14:18:18.935863] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:41.864 [2024-11-27 14:18:18.938798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:41.865 [2024-11-27 14:18:18.938860] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:18:41.865 [2024-11-27 14:18:18.938967] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:18:41.865 [2024-11-27 14:18:18.939034] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:18:41.865 
[2024-11-27 14:18:18.939246] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:18:41.865 [2024-11-27 14:18:18.939267] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:41.865 [2024-11-27 14:18:18.939286] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:18:41.865 [2024-11-27 14:18:18.939406] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:18:41.865 [2024-11-27 14:18:18.939560] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:18:41.865 pt1 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@542 -- # '[' 4 -gt 2 ']' 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@545 -- # verify_raid_bdev_state raid_bdev1 configuring raid5f 64 3 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- 
bdev/bdev_raid.sh@111 -- # local tmp 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:41.865 "name": "raid_bdev1", 00:18:41.865 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:41.865 "strip_size_kb": 64, 00:18:41.865 "state": "configuring", 00:18:41.865 "raid_level": "raid5f", 00:18:41.865 "superblock": true, 00:18:41.865 "num_base_bdevs": 4, 00:18:41.865 "num_base_bdevs_discovered": 2, 00:18:41.865 "num_base_bdevs_operational": 3, 00:18:41.865 "base_bdevs_list": [ 00:18:41.865 { 00:18:41.865 "name": null, 00:18:41.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:41.865 "is_configured": false, 00:18:41.865 "data_offset": 2048, 00:18:41.865 "data_size": 63488 00:18:41.865 }, 00:18:41.865 { 00:18:41.865 "name": "pt2", 00:18:41.865 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:41.865 "is_configured": true, 00:18:41.865 "data_offset": 2048, 00:18:41.865 "data_size": 63488 00:18:41.865 }, 00:18:41.865 { 00:18:41.865 "name": "pt3", 00:18:41.865 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:41.865 "is_configured": true, 00:18:41.865 "data_offset": 2048, 00:18:41.865 "data_size": 63488 00:18:41.865 }, 00:18:41.865 { 00:18:41.865 "name": null, 00:18:41.865 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:41.865 "is_configured": false, 00:18:41.865 "data_offset": 2048, 00:18:41.865 "data_size": 63488 00:18:41.865 } 00:18:41.865 ] 
00:18:41.865 }' 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:41.865 14:18:18 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # rpc_cmd bdev_raid_get_bdevs configuring 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@546 -- # [[ false == \f\a\l\s\e ]] 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@549 -- # rpc_cmd bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.436 [2024-11-27 14:18:19.535900] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:18:42.436 [2024-11-27 14:18:19.536122] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:42.436 [2024-11-27 14:18:19.536181] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c680 00:18:42.436 [2024-11-27 14:18:19.536197] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:42.436 [2024-11-27 14:18:19.536821] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:42.436 [2024-11-27 14:18:19.536846] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 
00:18:42.436 [2024-11-27 14:18:19.536969] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:18:42.436 [2024-11-27 14:18:19.537001] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:18:42.436 [2024-11-27 14:18:19.537212] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:18:42.436 [2024-11-27 14:18:19.537226] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:18:42.436 [2024-11-27 14:18:19.537579] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:42.436 [2024-11-27 14:18:19.544021] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:18:42.436 [2024-11-27 14:18:19.544051] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:18:42.436 [2024-11-27 14:18:19.544421] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:42.436 pt4 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:42.436 14:18:19 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.436 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:42.436 "name": "raid_bdev1", 00:18:42.436 "uuid": "e3ad67b4-9c5e-4a9c-9169-d7658a7a5329", 00:18:42.436 "strip_size_kb": 64, 00:18:42.436 "state": "online", 00:18:42.436 "raid_level": "raid5f", 00:18:42.436 "superblock": true, 00:18:42.436 "num_base_bdevs": 4, 00:18:42.436 "num_base_bdevs_discovered": 3, 00:18:42.436 "num_base_bdevs_operational": 3, 00:18:42.436 "base_bdevs_list": [ 00:18:42.436 { 00:18:42.436 "name": null, 00:18:42.436 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:42.436 "is_configured": false, 00:18:42.436 "data_offset": 2048, 00:18:42.436 "data_size": 63488 00:18:42.436 }, 00:18:42.436 { 00:18:42.436 "name": "pt2", 00:18:42.436 "uuid": "00000000-0000-0000-0000-000000000002", 00:18:42.436 "is_configured": true, 00:18:42.437 "data_offset": 2048, 00:18:42.437 "data_size": 63488 00:18:42.437 }, 00:18:42.437 { 00:18:42.437 "name": "pt3", 00:18:42.437 "uuid": "00000000-0000-0000-0000-000000000003", 00:18:42.437 "is_configured": true, 00:18:42.437 "data_offset": 2048, 00:18:42.437 "data_size": 63488 
00:18:42.437 }, 00:18:42.437 { 00:18:42.437 "name": "pt4", 00:18:42.437 "uuid": "00000000-0000-0000-0000-000000000004", 00:18:42.437 "is_configured": true, 00:18:42.437 "data_offset": 2048, 00:18:42.437 "data_size": 63488 00:18:42.437 } 00:18:42.437 ] 00:18:42.437 }' 00:18:42.437 14:18:19 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:42.437 14:18:19 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.011 14:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:43.012 [2024-11-27 14:18:20.140339] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@558 -- # '[' e3ad67b4-9c5e-4a9c-9169-d7658a7a5329 '!=' e3ad67b4-9c5e-4a9c-9169-d7658a7a5329 ']' 00:18:43.012 14:18:20 
bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@563 -- # killprocess 84424 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@954 -- # '[' -z 84424 ']' 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@958 -- # kill -0 84424 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # uname 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84424 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:43.012 killing process with pid 84424 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84424' 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@973 -- # kill 84424 00:18:43.012 [2024-11-27 14:18:20.221174] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:43.012 14:18:20 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@978 -- # wait 84424 00:18:43.012 [2024-11-27 14:18:20.221293] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:43.012 [2024-11-27 14:18:20.221393] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:43.012 [2024-11-27 14:18:20.221419] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:18:43.580 [2024-11-27 14:18:20.582976] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:44.517 14:18:21 bdev_raid.raid5f_superblock_test -- bdev/bdev_raid.sh@565 -- # return 0 00:18:44.517 
00:18:44.517 real 0m9.542s 00:18:44.517 user 0m15.642s 00:18:44.518 sys 0m1.434s 00:18:44.518 14:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:44.518 14:18:21 bdev_raid.raid5f_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.518 ************************************ 00:18:44.518 END TEST raid5f_superblock_test 00:18:44.518 ************************************ 00:18:44.518 14:18:21 bdev_raid -- bdev/bdev_raid.sh@989 -- # '[' true = true ']' 00:18:44.518 14:18:21 bdev_raid -- bdev/bdev_raid.sh@990 -- # run_test raid5f_rebuild_test raid_rebuild_test raid5f 4 false false true 00:18:44.518 14:18:21 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:18:44.518 14:18:21 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:44.518 14:18:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:44.518 ************************************ 00:18:44.518 START TEST raid5f_rebuild_test 00:18:44.518 ************************************ 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 false false true 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@571 -- # local superblock=false 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@573 -- # local verify=true 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo 
BaseBdev1 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@576 -- # local strip_size 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@577 -- # local create_arg 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@579 -- # local data_offset 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:18:44.518 14:18:21 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@592 -- # '[' false = true ']' 00:18:44.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@597 -- # raid_pid=84915 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@598 -- # waitforlisten 84915 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@835 -- # '[' -z 84915 ']' 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:44.518 14:18:21 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:44.518 [2024-11-27 14:18:21.766303] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:18:44.518 [2024-11-27 14:18:21.766718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84915 ] 00:18:44.518 I/O size of 3145728 is greater than zero copy threshold (65536). 00:18:44.518 Zero copy mechanism will not be used. 00:18:44.777 [2024-11-27 14:18:21.958611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:45.036 [2024-11-27 14:18:22.113950] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:45.295 [2024-11-27 14:18:22.316237] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.295 [2024-11-27 14:18:22.316587] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@868 -- # return 0 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.554 BaseBdev1_malloc 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 
-- # set +x 00:18:45.554 [2024-11-27 14:18:22.804005] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:45.554 [2024-11-27 14:18:22.804082] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.554 [2024-11-27 14:18:22.804114] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:18:45.554 [2024-11-27 14:18:22.804132] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.554 [2024-11-27 14:18:22.806954] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.554 [2024-11-27 14:18:22.807170] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:45.554 BaseBdev1 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.554 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.814 BaseBdev2_malloc 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.814 [2024-11-27 14:18:22.857061] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:18:45.814 [2024-11-27 14:18:22.857137] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.814 [2024-11-27 14:18:22.857170] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:18:45.814 [2024-11-27 14:18:22.857188] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.814 [2024-11-27 14:18:22.860013] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.814 [2024-11-27 14:18:22.860062] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:18:45.814 BaseBdev2 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.814 BaseBdev3_malloc 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev3_malloc -p BaseBdev3 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.814 [2024-11-27 14:18:22.920036] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:18:45.814 [2024-11-27 14:18:22.920113] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.814 [2024-11-27 14:18:22.920148] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:18:45.814 
[2024-11-27 14:18:22.920167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.814 [2024-11-27 14:18:22.923110] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.814 [2024-11-27 14:18:22.923312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:18:45.814 BaseBdev3 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.814 BaseBdev4_malloc 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.814 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.814 [2024-11-27 14:18:22.976541] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:18:45.815 [2024-11-27 14:18:22.976612] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.815 [2024-11-27 14:18:22.976642] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:18:45.815 [2024-11-27 14:18:22.976658] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.815 [2024-11-27 14:18:22.979389] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev 
registered 00:18:45.815 [2024-11-27 14:18:22.979452] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:18:45.815 BaseBdev4 00:18:45.815 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.815 14:18:22 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:18:45.815 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.815 14:18:22 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.815 spare_malloc 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.815 spare_delay 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.815 [2024-11-27 14:18:23.036657] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:45.815 [2024-11-27 14:18:23.036723] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:45.815 [2024-11-27 14:18:23.036749] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:18:45.815 [2024-11-27 14:18:23.036766] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:45.815 [2024-11-27 14:18:23.039700] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:45.815 [2024-11-27 14:18:23.039939] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:45.815 spare 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.815 [2024-11-27 14:18:23.044815] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:45.815 [2024-11-27 14:18:23.047445] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:45.815 [2024-11-27 14:18:23.047688] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:18:45.815 [2024-11-27 14:18:23.047794] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:18:45.815 [2024-11-27 14:18:23.047998] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:18:45.815 [2024-11-27 14:18:23.048035] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:18:45.815 [2024-11-27 14:18:23.048445] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:18:45.815 [2024-11-27 14:18:23.055117] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:18:45.815 [2024-11-27 14:18:23.055289] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:18:45.815 [2024-11-27 
14:18:23.055735] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:45.815 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.074 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:46.074 "name": "raid_bdev1", 00:18:46.074 "uuid": 
"859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:46.074 "strip_size_kb": 64, 00:18:46.074 "state": "online", 00:18:46.074 "raid_level": "raid5f", 00:18:46.074 "superblock": false, 00:18:46.074 "num_base_bdevs": 4, 00:18:46.074 "num_base_bdevs_discovered": 4, 00:18:46.074 "num_base_bdevs_operational": 4, 00:18:46.074 "base_bdevs_list": [ 00:18:46.074 { 00:18:46.074 "name": "BaseBdev1", 00:18:46.074 "uuid": "0e047797-3abf-5114-9724-be7c3116a0f9", 00:18:46.074 "is_configured": true, 00:18:46.074 "data_offset": 0, 00:18:46.074 "data_size": 65536 00:18:46.074 }, 00:18:46.074 { 00:18:46.074 "name": "BaseBdev2", 00:18:46.074 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:46.074 "is_configured": true, 00:18:46.074 "data_offset": 0, 00:18:46.074 "data_size": 65536 00:18:46.074 }, 00:18:46.074 { 00:18:46.074 "name": "BaseBdev3", 00:18:46.074 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:46.074 "is_configured": true, 00:18:46.074 "data_offset": 0, 00:18:46.074 "data_size": 65536 00:18:46.074 }, 00:18:46.074 { 00:18:46.074 "name": "BaseBdev4", 00:18:46.074 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:46.074 "is_configured": true, 00:18:46.074 "data_offset": 0, 00:18:46.074 "data_size": 65536 00:18:46.074 } 00:18:46.074 ] 00:18:46.074 }' 00:18:46.074 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:46.074 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.332 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:18:46.332 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:18:46.332 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.332 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.332 [2024-11-27 14:18:23.580045] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:18:46.332 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=196608 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@619 -- # data_offset=0 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # local bdev_list 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.591 14:18:23 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:18:46.850 [2024-11-27 14:18:23.971960] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:18:46.851 /dev/nbd0 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:18:46.851 1+0 records in 00:18:46.851 1+0 records out 00:18:46.851 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000376236 s, 10.9 MB/s 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.851 14:18:24 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@631 -- # echo 192 00:18:46.851 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=512 oflag=direct 00:18:47.789 512+0 records in 00:18:47.789 512+0 records out 00:18:47.789 100663296 bytes (101 MB, 96 MiB) copied, 0.679233 s, 148 MB/s 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock 
nbd_stop_disk /dev/nbd0 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:18:47.789 [2024-11-27 14:18:24.973538] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.789 [2024-11-27 14:18:24.989378] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- 
bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.789 14:18:24 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:47.789 14:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.789 14:18:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:47.789 "name": "raid_bdev1", 00:18:47.789 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:47.789 "strip_size_kb": 64, 00:18:47.789 "state": "online", 00:18:47.789 "raid_level": "raid5f", 00:18:47.789 "superblock": false, 00:18:47.789 "num_base_bdevs": 4, 00:18:47.789 "num_base_bdevs_discovered": 3, 00:18:47.789 "num_base_bdevs_operational": 3, 00:18:47.789 "base_bdevs_list": [ 00:18:47.789 { 00:18:47.789 "name": null, 00:18:47.789 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:47.789 "is_configured": false, 00:18:47.789 "data_offset": 0, 00:18:47.789 "data_size": 65536 00:18:47.789 }, 00:18:47.789 { 00:18:47.789 "name": "BaseBdev2", 00:18:47.789 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:47.789 "is_configured": true, 00:18:47.789 
"data_offset": 0, 00:18:47.789 "data_size": 65536 00:18:47.789 }, 00:18:47.789 { 00:18:47.789 "name": "BaseBdev3", 00:18:47.789 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:47.789 "is_configured": true, 00:18:47.789 "data_offset": 0, 00:18:47.789 "data_size": 65536 00:18:47.789 }, 00:18:47.789 { 00:18:47.789 "name": "BaseBdev4", 00:18:47.789 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:47.789 "is_configured": true, 00:18:47.789 "data_offset": 0, 00:18:47.789 "data_size": 65536 00:18:47.789 } 00:18:47.789 ] 00:18:47.789 }' 00:18:47.789 14:18:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:47.789 14:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.357 14:18:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:48.357 14:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.357 14:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:48.357 [2024-11-27 14:18:25.485610] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:48.357 [2024-11-27 14:18:25.499694] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b750 00:18:48.357 14:18:25 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.358 14:18:25 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@647 -- # sleep 1 00:18:48.358 [2024-11-27 14:18:25.508532] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local 
process_type=rebuild 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:49.323 "name": "raid_bdev1", 00:18:49.323 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:49.323 "strip_size_kb": 64, 00:18:49.323 "state": "online", 00:18:49.323 "raid_level": "raid5f", 00:18:49.323 "superblock": false, 00:18:49.323 "num_base_bdevs": 4, 00:18:49.323 "num_base_bdevs_discovered": 4, 00:18:49.323 "num_base_bdevs_operational": 4, 00:18:49.323 "process": { 00:18:49.323 "type": "rebuild", 00:18:49.323 "target": "spare", 00:18:49.323 "progress": { 00:18:49.323 "blocks": 17280, 00:18:49.323 "percent": 8 00:18:49.323 } 00:18:49.323 }, 00:18:49.323 "base_bdevs_list": [ 00:18:49.323 { 00:18:49.323 "name": "spare", 00:18:49.323 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:18:49.323 "is_configured": true, 00:18:49.323 "data_offset": 0, 00:18:49.323 "data_size": 65536 00:18:49.323 }, 00:18:49.323 { 00:18:49.323 "name": "BaseBdev2", 00:18:49.323 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:49.323 "is_configured": true, 00:18:49.323 "data_offset": 0, 00:18:49.323 "data_size": 65536 00:18:49.323 }, 00:18:49.323 { 00:18:49.323 "name": "BaseBdev3", 00:18:49.323 "uuid": 
"d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:49.323 "is_configured": true, 00:18:49.323 "data_offset": 0, 00:18:49.323 "data_size": 65536 00:18:49.323 }, 00:18:49.323 { 00:18:49.323 "name": "BaseBdev4", 00:18:49.323 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:49.323 "is_configured": true, 00:18:49.323 "data_offset": 0, 00:18:49.323 "data_size": 65536 00:18:49.323 } 00:18:49.323 ] 00:18:49.323 }' 00:18:49.323 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:49.581 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:49.581 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:49.581 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:49.581 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:18:49.581 14:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.581 14:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.581 [2024-11-27 14:18:26.678407] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.581 [2024-11-27 14:18:26.721765] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:18:49.581 [2024-11-27 14:18:26.721909] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:49.581 [2024-11-27 14:18:26.721934] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:49.581 [2024-11-27 14:18:26.721949] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:18:49.581 14:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.581 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@656 -- # 
verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:18:49.582 "name": "raid_bdev1", 00:18:49.582 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:49.582 "strip_size_kb": 64, 00:18:49.582 "state": "online", 00:18:49.582 "raid_level": "raid5f", 00:18:49.582 "superblock": false, 00:18:49.582 "num_base_bdevs": 4, 00:18:49.582 "num_base_bdevs_discovered": 3, 00:18:49.582 
"num_base_bdevs_operational": 3, 00:18:49.582 "base_bdevs_list": [ 00:18:49.582 { 00:18:49.582 "name": null, 00:18:49.582 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:49.582 "is_configured": false, 00:18:49.582 "data_offset": 0, 00:18:49.582 "data_size": 65536 00:18:49.582 }, 00:18:49.582 { 00:18:49.582 "name": "BaseBdev2", 00:18:49.582 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:49.582 "is_configured": true, 00:18:49.582 "data_offset": 0, 00:18:49.582 "data_size": 65536 00:18:49.582 }, 00:18:49.582 { 00:18:49.582 "name": "BaseBdev3", 00:18:49.582 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:49.582 "is_configured": true, 00:18:49.582 "data_offset": 0, 00:18:49.582 "data_size": 65536 00:18:49.582 }, 00:18:49.582 { 00:18:49.582 "name": "BaseBdev4", 00:18:49.582 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:49.582 "is_configured": true, 00:18:49.582 "data_offset": 0, 00:18:49.582 "data_size": 65536 00:18:49.582 } 00:18:49.582 ] 00:18:49.582 }' 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:18:49.582 14:18:26 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:50.148 14:18:27 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:50.148 "name": "raid_bdev1", 00:18:50.148 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:50.148 "strip_size_kb": 64, 00:18:50.148 "state": "online", 00:18:50.148 "raid_level": "raid5f", 00:18:50.148 "superblock": false, 00:18:50.148 "num_base_bdevs": 4, 00:18:50.148 "num_base_bdevs_discovered": 3, 00:18:50.148 "num_base_bdevs_operational": 3, 00:18:50.148 "base_bdevs_list": [ 00:18:50.148 { 00:18:50.148 "name": null, 00:18:50.148 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:50.148 "is_configured": false, 00:18:50.148 "data_offset": 0, 00:18:50.148 "data_size": 65536 00:18:50.148 }, 00:18:50.148 { 00:18:50.148 "name": "BaseBdev2", 00:18:50.148 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:50.148 "is_configured": true, 00:18:50.148 "data_offset": 0, 00:18:50.148 "data_size": 65536 00:18:50.148 }, 00:18:50.148 { 00:18:50.148 "name": "BaseBdev3", 00:18:50.148 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:50.148 "is_configured": true, 00:18:50.148 "data_offset": 0, 00:18:50.148 "data_size": 65536 00:18:50.148 }, 00:18:50.148 { 00:18:50.148 "name": "BaseBdev4", 00:18:50.148 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:50.148 "is_configured": true, 00:18:50.148 "data_offset": 0, 00:18:50.148 "data_size": 65536 00:18:50.148 } 00:18:50.148 ] 00:18:50.148 }' 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # 
jq -r '.process.target // "none"' 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.148 14:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:50.148 [2024-11-27 14:18:27.422902] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:50.406 [2024-11-27 14:18:27.438104] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002b820 00:18:50.406 14:18:27 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.406 14:18:27 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@663 -- # sleep 1 00:18:50.406 [2024-11-27 14:18:27.447065] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:51.340 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.340 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.340 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.340 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.340 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.340 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.340 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.340 14:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.340 14:18:28 
bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.340 14:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.340 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.340 "name": "raid_bdev1", 00:18:51.340 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:51.340 "strip_size_kb": 64, 00:18:51.340 "state": "online", 00:18:51.340 "raid_level": "raid5f", 00:18:51.340 "superblock": false, 00:18:51.340 "num_base_bdevs": 4, 00:18:51.340 "num_base_bdevs_discovered": 4, 00:18:51.341 "num_base_bdevs_operational": 4, 00:18:51.341 "process": { 00:18:51.341 "type": "rebuild", 00:18:51.341 "target": "spare", 00:18:51.341 "progress": { 00:18:51.341 "blocks": 17280, 00:18:51.341 "percent": 8 00:18:51.341 } 00:18:51.341 }, 00:18:51.341 "base_bdevs_list": [ 00:18:51.341 { 00:18:51.341 "name": "spare", 00:18:51.341 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:18:51.341 "is_configured": true, 00:18:51.341 "data_offset": 0, 00:18:51.341 "data_size": 65536 00:18:51.341 }, 00:18:51.341 { 00:18:51.341 "name": "BaseBdev2", 00:18:51.341 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:51.341 "is_configured": true, 00:18:51.341 "data_offset": 0, 00:18:51.341 "data_size": 65536 00:18:51.341 }, 00:18:51.341 { 00:18:51.341 "name": "BaseBdev3", 00:18:51.341 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:51.341 "is_configured": true, 00:18:51.341 "data_offset": 0, 00:18:51.341 "data_size": 65536 00:18:51.341 }, 00:18:51.341 { 00:18:51.341 "name": "BaseBdev4", 00:18:51.341 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:51.341 "is_configured": true, 00:18:51.341 "data_offset": 0, 00:18:51.341 "data_size": 65536 00:18:51.341 } 00:18:51.341 ] 00:18:51.341 }' 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@666 -- # '[' false = true ']' 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@706 -- # local timeout=675 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.341 14:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:51.599 14:18:28 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.599 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:51.599 
"name": "raid_bdev1", 00:18:51.599 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:51.599 "strip_size_kb": 64, 00:18:51.599 "state": "online", 00:18:51.599 "raid_level": "raid5f", 00:18:51.599 "superblock": false, 00:18:51.599 "num_base_bdevs": 4, 00:18:51.599 "num_base_bdevs_discovered": 4, 00:18:51.599 "num_base_bdevs_operational": 4, 00:18:51.599 "process": { 00:18:51.599 "type": "rebuild", 00:18:51.599 "target": "spare", 00:18:51.599 "progress": { 00:18:51.599 "blocks": 21120, 00:18:51.599 "percent": 10 00:18:51.599 } 00:18:51.599 }, 00:18:51.599 "base_bdevs_list": [ 00:18:51.599 { 00:18:51.599 "name": "spare", 00:18:51.599 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:18:51.599 "is_configured": true, 00:18:51.599 "data_offset": 0, 00:18:51.599 "data_size": 65536 00:18:51.599 }, 00:18:51.599 { 00:18:51.599 "name": "BaseBdev2", 00:18:51.599 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:51.599 "is_configured": true, 00:18:51.599 "data_offset": 0, 00:18:51.599 "data_size": 65536 00:18:51.599 }, 00:18:51.599 { 00:18:51.599 "name": "BaseBdev3", 00:18:51.599 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:51.599 "is_configured": true, 00:18:51.599 "data_offset": 0, 00:18:51.599 "data_size": 65536 00:18:51.599 }, 00:18:51.599 { 00:18:51.599 "name": "BaseBdev4", 00:18:51.599 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:51.599 "is_configured": true, 00:18:51.599 "data_offset": 0, 00:18:51.599 "data_size": 65536 00:18:51.599 } 00:18:51.599 ] 00:18:51.599 }' 00:18:51.599 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:51.600 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:51.600 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:51.600 14:18:28 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:51.600 14:18:28 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:52.534 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:52.534 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:52.534 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:52.534 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:52.534 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:52.534 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:52.534 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:52.534 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:52.534 14:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.534 14:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:52.534 14:18:29 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.793 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:52.793 "name": "raid_bdev1", 00:18:52.793 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:52.793 "strip_size_kb": 64, 00:18:52.793 "state": "online", 00:18:52.793 "raid_level": "raid5f", 00:18:52.793 "superblock": false, 00:18:52.793 "num_base_bdevs": 4, 00:18:52.793 "num_base_bdevs_discovered": 4, 00:18:52.793 "num_base_bdevs_operational": 4, 00:18:52.793 "process": { 00:18:52.793 "type": "rebuild", 00:18:52.793 "target": "spare", 00:18:52.793 "progress": { 00:18:52.793 "blocks": 44160, 00:18:52.793 "percent": 22 00:18:52.793 } 00:18:52.793 }, 00:18:52.793 "base_bdevs_list": [ 00:18:52.793 { 
00:18:52.793 "name": "spare", 00:18:52.793 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:18:52.793 "is_configured": true, 00:18:52.793 "data_offset": 0, 00:18:52.793 "data_size": 65536 00:18:52.793 }, 00:18:52.793 { 00:18:52.793 "name": "BaseBdev2", 00:18:52.793 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:52.793 "is_configured": true, 00:18:52.793 "data_offset": 0, 00:18:52.793 "data_size": 65536 00:18:52.793 }, 00:18:52.793 { 00:18:52.793 "name": "BaseBdev3", 00:18:52.793 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:52.793 "is_configured": true, 00:18:52.793 "data_offset": 0, 00:18:52.793 "data_size": 65536 00:18:52.793 }, 00:18:52.793 { 00:18:52.793 "name": "BaseBdev4", 00:18:52.793 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:52.793 "is_configured": true, 00:18:52.793 "data_offset": 0, 00:18:52.793 "data_size": 65536 00:18:52.793 } 00:18:52.793 ] 00:18:52.793 }' 00:18:52.793 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:52.793 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:52.793 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:52.793 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:52.793 14:18:29 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:53.730 14:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:53.730 14:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:53.730 14:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:53.730 14:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:53.730 14:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # 
local target=spare 00:18:53.730 14:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:53.730 14:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:53.730 14:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.730 14:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:53.730 14:18:30 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:53.730 14:18:30 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.730 14:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:53.730 "name": "raid_bdev1", 00:18:53.730 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:53.730 "strip_size_kb": 64, 00:18:53.730 "state": "online", 00:18:53.730 "raid_level": "raid5f", 00:18:53.730 "superblock": false, 00:18:53.730 "num_base_bdevs": 4, 00:18:53.730 "num_base_bdevs_discovered": 4, 00:18:53.730 "num_base_bdevs_operational": 4, 00:18:53.730 "process": { 00:18:53.730 "type": "rebuild", 00:18:53.730 "target": "spare", 00:18:53.730 "progress": { 00:18:53.730 "blocks": 65280, 00:18:53.730 "percent": 33 00:18:53.730 } 00:18:53.730 }, 00:18:53.730 "base_bdevs_list": [ 00:18:53.730 { 00:18:53.730 "name": "spare", 00:18:53.730 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:18:53.730 "is_configured": true, 00:18:53.730 "data_offset": 0, 00:18:53.730 "data_size": 65536 00:18:53.730 }, 00:18:53.730 { 00:18:53.730 "name": "BaseBdev2", 00:18:53.730 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:53.730 "is_configured": true, 00:18:53.730 "data_offset": 0, 00:18:53.730 "data_size": 65536 00:18:53.730 }, 00:18:53.730 { 00:18:53.730 "name": "BaseBdev3", 00:18:53.730 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:53.730 "is_configured": true, 00:18:53.730 "data_offset": 0, 00:18:53.730 
"data_size": 65536 00:18:53.730 }, 00:18:53.730 { 00:18:53.730 "name": "BaseBdev4", 00:18:53.730 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:53.730 "is_configured": true, 00:18:53.730 "data_offset": 0, 00:18:53.730 "data_size": 65536 00:18:53.730 } 00:18:53.730 ] 00:18:53.730 }' 00:18:53.730 14:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:53.988 14:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:53.988 14:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:53.988 14:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:53.988 14:18:31 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.923 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:54.923 "name": "raid_bdev1", 00:18:54.923 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:54.923 "strip_size_kb": 64, 00:18:54.923 "state": "online", 00:18:54.923 "raid_level": "raid5f", 00:18:54.923 "superblock": false, 00:18:54.923 "num_base_bdevs": 4, 00:18:54.923 "num_base_bdevs_discovered": 4, 00:18:54.923 "num_base_bdevs_operational": 4, 00:18:54.924 "process": { 00:18:54.924 "type": "rebuild", 00:18:54.924 "target": "spare", 00:18:54.924 "progress": { 00:18:54.924 "blocks": 88320, 00:18:54.924 "percent": 44 00:18:54.924 } 00:18:54.924 }, 00:18:54.924 "base_bdevs_list": [ 00:18:54.924 { 00:18:54.924 "name": "spare", 00:18:54.924 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:18:54.924 "is_configured": true, 00:18:54.924 "data_offset": 0, 00:18:54.924 "data_size": 65536 00:18:54.924 }, 00:18:54.924 { 00:18:54.924 "name": "BaseBdev2", 00:18:54.924 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:54.924 "is_configured": true, 00:18:54.924 "data_offset": 0, 00:18:54.924 "data_size": 65536 00:18:54.924 }, 00:18:54.924 { 00:18:54.924 "name": "BaseBdev3", 00:18:54.924 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:54.924 "is_configured": true, 00:18:54.924 "data_offset": 0, 00:18:54.924 "data_size": 65536 00:18:54.924 }, 00:18:54.924 { 00:18:54.924 "name": "BaseBdev4", 00:18:54.924 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:54.924 "is_configured": true, 00:18:54.924 "data_offset": 0, 00:18:54.924 "data_size": 65536 00:18:54.924 } 00:18:54.924 ] 00:18:54.924 }' 00:18:54.924 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:55.182 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:55.182 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r 
'.process.target // "none"' 00:18:55.182 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:55.182 14:18:32 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:56.120 "name": "raid_bdev1", 00:18:56.120 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:56.120 "strip_size_kb": 64, 00:18:56.120 "state": "online", 00:18:56.120 "raid_level": "raid5f", 00:18:56.120 "superblock": false, 00:18:56.120 "num_base_bdevs": 4, 00:18:56.120 "num_base_bdevs_discovered": 4, 00:18:56.120 "num_base_bdevs_operational": 4, 00:18:56.120 "process": { 00:18:56.120 "type": "rebuild", 00:18:56.120 "target": "spare", 00:18:56.120 
"progress": { 00:18:56.120 "blocks": 109440, 00:18:56.120 "percent": 55 00:18:56.120 } 00:18:56.120 }, 00:18:56.120 "base_bdevs_list": [ 00:18:56.120 { 00:18:56.120 "name": "spare", 00:18:56.120 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:18:56.120 "is_configured": true, 00:18:56.120 "data_offset": 0, 00:18:56.120 "data_size": 65536 00:18:56.120 }, 00:18:56.120 { 00:18:56.120 "name": "BaseBdev2", 00:18:56.120 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:56.120 "is_configured": true, 00:18:56.120 "data_offset": 0, 00:18:56.120 "data_size": 65536 00:18:56.120 }, 00:18:56.120 { 00:18:56.120 "name": "BaseBdev3", 00:18:56.120 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:56.120 "is_configured": true, 00:18:56.120 "data_offset": 0, 00:18:56.120 "data_size": 65536 00:18:56.120 }, 00:18:56.120 { 00:18:56.120 "name": "BaseBdev4", 00:18:56.120 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:56.120 "is_configured": true, 00:18:56.120 "data_offset": 0, 00:18:56.120 "data_size": 65536 00:18:56.120 } 00:18:56.120 ] 00:18:56.120 }' 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:56.120 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:56.380 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:56.380 14:18:33 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:57.403 14:18:34 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:57.403 "name": "raid_bdev1", 00:18:57.403 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:57.403 "strip_size_kb": 64, 00:18:57.403 "state": "online", 00:18:57.403 "raid_level": "raid5f", 00:18:57.403 "superblock": false, 00:18:57.403 "num_base_bdevs": 4, 00:18:57.403 "num_base_bdevs_discovered": 4, 00:18:57.403 "num_base_bdevs_operational": 4, 00:18:57.403 "process": { 00:18:57.403 "type": "rebuild", 00:18:57.403 "target": "spare", 00:18:57.403 "progress": { 00:18:57.403 "blocks": 132480, 00:18:57.403 "percent": 67 00:18:57.403 } 00:18:57.403 }, 00:18:57.403 "base_bdevs_list": [ 00:18:57.403 { 00:18:57.403 "name": "spare", 00:18:57.403 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:18:57.403 "is_configured": true, 00:18:57.403 "data_offset": 0, 00:18:57.403 "data_size": 65536 00:18:57.403 }, 00:18:57.403 { 00:18:57.403 "name": "BaseBdev2", 00:18:57.403 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:57.403 "is_configured": true, 00:18:57.403 "data_offset": 0, 00:18:57.403 "data_size": 65536 00:18:57.403 }, 00:18:57.403 { 
00:18:57.403 "name": "BaseBdev3", 00:18:57.403 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:57.403 "is_configured": true, 00:18:57.403 "data_offset": 0, 00:18:57.403 "data_size": 65536 00:18:57.403 }, 00:18:57.403 { 00:18:57.403 "name": "BaseBdev4", 00:18:57.403 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:57.403 "is_configured": true, 00:18:57.403 "data_offset": 0, 00:18:57.403 "data_size": 65536 00:18:57.403 } 00:18:57.403 ] 00:18:57.403 }' 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:57.403 14:18:34 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:58.339 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:58.339 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:58.339 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:58.339 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:58.339 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:58.339 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:58.340 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:58.340 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:58.340 14:18:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:58.340 14:18:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:58.598 14:18:35 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.598 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:58.598 "name": "raid_bdev1", 00:18:58.598 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:58.598 "strip_size_kb": 64, 00:18:58.598 "state": "online", 00:18:58.598 "raid_level": "raid5f", 00:18:58.598 "superblock": false, 00:18:58.598 "num_base_bdevs": 4, 00:18:58.598 "num_base_bdevs_discovered": 4, 00:18:58.598 "num_base_bdevs_operational": 4, 00:18:58.598 "process": { 00:18:58.598 "type": "rebuild", 00:18:58.598 "target": "spare", 00:18:58.598 "progress": { 00:18:58.598 "blocks": 153600, 00:18:58.598 "percent": 78 00:18:58.598 } 00:18:58.598 }, 00:18:58.598 "base_bdevs_list": [ 00:18:58.598 { 00:18:58.598 "name": "spare", 00:18:58.598 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:18:58.598 "is_configured": true, 00:18:58.598 "data_offset": 0, 00:18:58.598 "data_size": 65536 00:18:58.598 }, 00:18:58.598 { 00:18:58.598 "name": "BaseBdev2", 00:18:58.598 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:58.598 "is_configured": true, 00:18:58.598 "data_offset": 0, 00:18:58.598 "data_size": 65536 00:18:58.598 }, 00:18:58.598 { 00:18:58.598 "name": "BaseBdev3", 00:18:58.598 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:58.598 "is_configured": true, 00:18:58.598 "data_offset": 0, 00:18:58.598 "data_size": 65536 00:18:58.598 }, 00:18:58.598 { 00:18:58.598 "name": "BaseBdev4", 00:18:58.598 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:58.598 "is_configured": true, 00:18:58.598 "data_offset": 0, 00:18:58.598 "data_size": 65536 00:18:58.598 } 00:18:58.598 ] 00:18:58.598 }' 00:18:58.598 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:58.598 14:18:35 
bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:58.598 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:58.598 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:58.598 14:18:35 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:18:59.535 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:18:59.535 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:59.535 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:18:59.535 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:18:59.535 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:18:59.535 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:18:59.535 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:18:59.535 14:18:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.535 14:18:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:18:59.535 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:59.535 14:18:36 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.794 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:18:59.794 "name": "raid_bdev1", 00:18:59.794 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:18:59.794 "strip_size_kb": 64, 00:18:59.794 "state": "online", 00:18:59.794 "raid_level": "raid5f", 00:18:59.794 "superblock": false, 00:18:59.794 "num_base_bdevs": 4, 00:18:59.794 
"num_base_bdevs_discovered": 4, 00:18:59.794 "num_base_bdevs_operational": 4, 00:18:59.794 "process": { 00:18:59.794 "type": "rebuild", 00:18:59.794 "target": "spare", 00:18:59.794 "progress": { 00:18:59.794 "blocks": 176640, 00:18:59.794 "percent": 89 00:18:59.794 } 00:18:59.794 }, 00:18:59.794 "base_bdevs_list": [ 00:18:59.794 { 00:18:59.794 "name": "spare", 00:18:59.794 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:18:59.794 "is_configured": true, 00:18:59.794 "data_offset": 0, 00:18:59.794 "data_size": 65536 00:18:59.794 }, 00:18:59.794 { 00:18:59.794 "name": "BaseBdev2", 00:18:59.794 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:18:59.794 "is_configured": true, 00:18:59.794 "data_offset": 0, 00:18:59.794 "data_size": 65536 00:18:59.794 }, 00:18:59.794 { 00:18:59.794 "name": "BaseBdev3", 00:18:59.794 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:18:59.794 "is_configured": true, 00:18:59.794 "data_offset": 0, 00:18:59.794 "data_size": 65536 00:18:59.794 }, 00:18:59.794 { 00:18:59.794 "name": "BaseBdev4", 00:18:59.794 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:18:59.794 "is_configured": true, 00:18:59.794 "data_offset": 0, 00:18:59.794 "data_size": 65536 00:18:59.794 } 00:18:59.794 ] 00:18:59.794 }' 00:18:59.794 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:18:59.794 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:59.794 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:18:59.794 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:18:59.794 14:18:36 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:00.731 [2024-11-27 14:18:37.858260] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:00.731 [2024-11-27 14:18:37.858395] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:00.731 [2024-11-27 14:18:37.858474] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:00.732 14:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:00.732 14:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:00.732 14:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.732 14:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:00.732 14:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:00.732 14:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.732 14:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.732 14:18:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.732 14:18:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.732 14:18:37 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.732 14:18:37 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.991 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.991 "name": "raid_bdev1", 00:19:00.991 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:19:00.991 "strip_size_kb": 64, 00:19:00.991 "state": "online", 00:19:00.991 "raid_level": "raid5f", 00:19:00.991 "superblock": false, 00:19:00.991 "num_base_bdevs": 4, 00:19:00.991 "num_base_bdevs_discovered": 4, 00:19:00.991 "num_base_bdevs_operational": 4, 00:19:00.991 "base_bdevs_list": [ 00:19:00.991 { 00:19:00.991 "name": "spare", 00:19:00.991 "uuid": 
"ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:19:00.991 "is_configured": true, 00:19:00.991 "data_offset": 0, 00:19:00.991 "data_size": 65536 00:19:00.991 }, 00:19:00.991 { 00:19:00.991 "name": "BaseBdev2", 00:19:00.991 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:19:00.991 "is_configured": true, 00:19:00.991 "data_offset": 0, 00:19:00.991 "data_size": 65536 00:19:00.991 }, 00:19:00.991 { 00:19:00.991 "name": "BaseBdev3", 00:19:00.991 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:19:00.991 "is_configured": true, 00:19:00.991 "data_offset": 0, 00:19:00.991 "data_size": 65536 00:19:00.991 }, 00:19:00.991 { 00:19:00.991 "name": "BaseBdev4", 00:19:00.991 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:19:00.991 "is_configured": true, 00:19:00.991 "data_offset": 0, 00:19:00.991 "data_size": 65536 00:19:00.991 } 00:19:00.991 ] 00:19:00.991 }' 00:19:00.991 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@709 -- # break 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test 
-- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:00.992 "name": "raid_bdev1", 00:19:00.992 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:19:00.992 "strip_size_kb": 64, 00:19:00.992 "state": "online", 00:19:00.992 "raid_level": "raid5f", 00:19:00.992 "superblock": false, 00:19:00.992 "num_base_bdevs": 4, 00:19:00.992 "num_base_bdevs_discovered": 4, 00:19:00.992 "num_base_bdevs_operational": 4, 00:19:00.992 "base_bdevs_list": [ 00:19:00.992 { 00:19:00.992 "name": "spare", 00:19:00.992 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:19:00.992 "is_configured": true, 00:19:00.992 "data_offset": 0, 00:19:00.992 "data_size": 65536 00:19:00.992 }, 00:19:00.992 { 00:19:00.992 "name": "BaseBdev2", 00:19:00.992 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:19:00.992 "is_configured": true, 00:19:00.992 "data_offset": 0, 00:19:00.992 "data_size": 65536 00:19:00.992 }, 00:19:00.992 { 00:19:00.992 "name": "BaseBdev3", 00:19:00.992 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:19:00.992 "is_configured": true, 00:19:00.992 "data_offset": 0, 00:19:00.992 "data_size": 65536 00:19:00.992 }, 00:19:00.992 { 00:19:00.992 "name": "BaseBdev4", 00:19:00.992 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:19:00.992 "is_configured": true, 00:19:00.992 "data_offset": 0, 00:19:00.992 "data_size": 65536 00:19:00.992 } 00:19:00.992 ] 00:19:00.992 }' 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:00.992 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:01.306 "name": "raid_bdev1", 00:19:01.306 "uuid": "859122f1-ac4d-42e9-98e2-9e08cfcbf6fb", 00:19:01.306 "strip_size_kb": 64, 00:19:01.306 "state": "online", 00:19:01.306 "raid_level": "raid5f", 00:19:01.306 "superblock": false, 00:19:01.306 "num_base_bdevs": 4, 00:19:01.306 "num_base_bdevs_discovered": 4, 00:19:01.306 "num_base_bdevs_operational": 4, 00:19:01.306 "base_bdevs_list": [ 00:19:01.306 { 00:19:01.306 "name": "spare", 00:19:01.306 "uuid": "ad9664ab-c1c7-50bc-9de5-987ce9b1d1c0", 00:19:01.306 "is_configured": true, 00:19:01.306 "data_offset": 0, 00:19:01.306 "data_size": 65536 00:19:01.306 }, 00:19:01.306 { 00:19:01.306 "name": "BaseBdev2", 00:19:01.306 "uuid": "4d72b189-ba59-5903-95ed-7e0ef752b7c5", 00:19:01.306 "is_configured": true, 00:19:01.306 "data_offset": 0, 00:19:01.306 "data_size": 65536 00:19:01.306 }, 00:19:01.306 { 00:19:01.306 "name": "BaseBdev3", 00:19:01.306 "uuid": "d27c9422-6dd9-5d51-a123-921a868c0556", 00:19:01.306 "is_configured": true, 00:19:01.306 "data_offset": 0, 00:19:01.306 "data_size": 65536 00:19:01.306 }, 00:19:01.306 { 00:19:01.306 "name": "BaseBdev4", 00:19:01.306 "uuid": "1a560a11-40c4-5d9e-bbc4-d314acc92eb9", 00:19:01.306 "is_configured": true, 00:19:01.306 "data_offset": 0, 00:19:01.306 "data_size": 65536 00:19:01.306 } 00:19:01.306 ] 00:19:01.306 }' 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:01.306 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.565 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:01.565 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.565 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.565 [2024-11-27 14:18:38.799197] 
bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:01.565 [2024-11-27 14:18:38.799259] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:01.565 [2024-11-27 14:18:38.799357] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:01.565 [2024-11-27 14:18:38.799467] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:01.565 [2024-11-27 14:18:38.799515] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:01.565 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.565 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:01.565 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.565 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:01.565 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # jq length 00:19:01.565 14:18:38 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- 
bdev/nbd_common.sh@10 -- # local bdev_list 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@12 -- # local i 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:01.824 14:18:38 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:02.081 /dev/nbd0 00:19:02.081 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:02.081 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:02.081 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:02.081 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:02.081 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:02.081 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:02.081 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:02.081 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:02.081 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:02.081 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:02.081 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:02.081 1+0 records in 
00:19:02.081 1+0 records out 00:19:02.082 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000315557 s, 13.0 MB/s 00:19:02.082 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.082 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:02.082 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.082 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:02.082 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:02.082 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.082 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.082 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:02.340 /dev/nbd1 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@873 -- # local i 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@877 -- # break 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- 
common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:02.340 1+0 records in 00:19:02.340 1+0 records out 00:19:02.340 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00048622 s, 8.4 MB/s 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@890 -- # size=4096 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@893 -- # return 0 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:02.340 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@738 -- # cmp -i 0 /dev/nbd0 /dev/nbd1 00:19:02.613 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:02.613 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:02.613 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:02.613 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:02.613 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@51 -- # local i 00:19:02.613 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 
-- # for i in "${nbd_list[@]}" 00:19:02.613 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:02.874 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:02.874 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:02.874 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:02.874 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:02.874 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:02.874 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:02.874 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:02.874 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:02.874 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:02.874 14:18:39 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@41 -- # break 00:19:03.133 14:18:40 
bdev_raid.raid5f_rebuild_test -- bdev/nbd_common.sh@45 -- # return 0 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@743 -- # '[' false = true ']' 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@784 -- # killprocess 84915 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@954 -- # '[' -z 84915 ']' 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@958 -- # kill -0 84915 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # uname 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84915 00:19:03.133 killing process with pid 84915 00:19:03.133 Received shutdown signal, test time was about 60.000000 seconds 00:19:03.133 00:19:03.133 Latency(us) 00:19:03.133 [2024-11-27T14:18:40.411Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.133 [2024-11-27T14:18:40.411Z] =================================================================================================================== 00:19:03.133 [2024-11-27T14:18:40.411Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84915' 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@973 -- # kill 84915 00:19:03.133 [2024-11-27 14:18:40.324690] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:03.133 14:18:40 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@978 -- # 
wait 84915 00:19:03.701 [2024-11-27 14:18:40.741030] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test -- bdev/bdev_raid.sh@786 -- # return 0 00:19:04.639 00:19:04.639 real 0m20.085s 00:19:04.639 user 0m24.974s 00:19:04.639 sys 0m2.317s 00:19:04.639 ************************************ 00:19:04.639 END TEST raid5f_rebuild_test 00:19:04.639 ************************************ 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test -- common/autotest_common.sh@10 -- # set +x 00:19:04.639 14:18:41 bdev_raid -- bdev/bdev_raid.sh@991 -- # run_test raid5f_rebuild_test_sb raid_rebuild_test raid5f 4 true false true 00:19:04.639 14:18:41 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:04.639 14:18:41 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:04.639 14:18:41 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:04.639 ************************************ 00:19:04.639 START TEST raid5f_rebuild_test_sb 00:19:04.639 ************************************ 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid5f 4 true false true 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@569 -- # local raid_level=raid5f 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=4 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev3 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # echo BaseBdev4 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:04.639 14:18:41 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@581 -- # '[' raid5f '!=' raid1 ']' 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@582 -- # '[' false = true ']' 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@586 -- # strip_size=64 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@587 -- # create_arg+=' -z 64' 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@597 -- # raid_pid=85420 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@598 -- # waitforlisten 85420 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@835 -- # '[' -z 85420 ']' 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:04.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:04.639 14:18:41 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:04.639 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:04.639 Zero copy mechanism will not be used. 00:19:04.639 [2024-11-27 14:18:41.906883] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:19:04.639 [2024-11-27 14:18:41.907090] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85420 ] 00:19:04.899 [2024-11-27 14:18:42.090528] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.157 [2024-11-27 14:18:42.220291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:05.157 [2024-11-27 14:18:42.409943] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.157 [2024-11-27 14:18:42.410016] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@868 -- # return 0 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.726 
14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.726 BaseBdev1_malloc 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.726 [2024-11-27 14:18:42.922025] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:05.726 [2024-11-27 14:18:42.922112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.726 [2024-11-27 14:18:42.922141] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:05.726 [2024-11-27 14:18:42.922173] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.726 [2024-11-27 14:18:42.924920] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.726 [2024-11-27 14:18:42.924966] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:05.726 BaseBdev1 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.726 BaseBdev2_malloc 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.726 [2024-11-27 14:18:42.976568] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:05.726 [2024-11-27 14:18:42.976653] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.726 [2024-11-27 14:18:42.976686] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:05.726 [2024-11-27 14:18:42.976703] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.726 [2024-11-27 14:18:42.979661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.726 [2024-11-27 14:18:42.979705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:05.726 BaseBdev2 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.726 14:18:42 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.986 BaseBdev3_malloc 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b 
BaseBdev3_malloc -p BaseBdev3 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.986 [2024-11-27 14:18:43.047757] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev3_malloc 00:19:05.986 [2024-11-27 14:18:43.047848] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.986 [2024-11-27 14:18:43.047881] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:05.986 [2024-11-27 14:18:43.047900] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.986 [2024-11-27 14:18:43.050633] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.986 [2024-11-27 14:18:43.050685] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:19:05.986 BaseBdev3 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.986 BaseBdev4_malloc 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev4_malloc -p BaseBdev4 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.986 14:18:43 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.986 [2024-11-27 14:18:43.103605] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev4_malloc 00:19:05.986 [2024-11-27 14:18:43.103897] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.986 [2024-11-27 14:18:43.103937] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:19:05.986 [2024-11-27 14:18:43.103956] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.986 [2024-11-27 14:18:43.106771] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.986 [2024-11-27 14:18:43.106834] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:19:05.986 BaseBdev4 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 512 -b spare_malloc 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.986 spare_malloc 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.986 spare_delay 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.986 [2024-11-27 14:18:43.164910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:05.986 [2024-11-27 14:18:43.164975] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:05.986 [2024-11-27 14:18:43.165002] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:19:05.986 [2024-11-27 14:18:43.165018] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:05.986 [2024-11-27 14:18:43.167820] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:05.986 [2024-11-27 14:18:43.167869] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:05.986 spare 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -z 64 -s -r raid5f -b ''\''BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4'\''' -n raid_bdev1 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.986 [2024-11-27 14:18:43.176984] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:05.986 [2024-11-27 14:18:43.179469] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:05.986 [2024-11-27 14:18:43.179702] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:05.986 [2024-11-27 14:18:43.179832] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:05.986 [2024-11-27 14:18:43.180095] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:05.986 [2024-11-27 14:18:43.180118] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:05.986 [2024-11-27 14:18:43.180460] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:05.986 [2024-11-27 14:18:43.187288] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:05.986 [2024-11-27 14:18:43.187456] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:05.986 [2024-11-27 14:18:43.187865] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:05.986 "name": "raid_bdev1", 00:19:05.986 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:05.986 "strip_size_kb": 64, 00:19:05.986 "state": "online", 00:19:05.986 "raid_level": "raid5f", 00:19:05.986 "superblock": true, 00:19:05.986 "num_base_bdevs": 4, 00:19:05.986 "num_base_bdevs_discovered": 4, 00:19:05.986 "num_base_bdevs_operational": 4, 00:19:05.986 "base_bdevs_list": [ 00:19:05.986 { 00:19:05.986 "name": "BaseBdev1", 00:19:05.986 "uuid": "0b31857e-63ed-5326-b3a4-eddc753d943e", 00:19:05.986 "is_configured": true, 00:19:05.986 "data_offset": 2048, 00:19:05.986 "data_size": 63488 00:19:05.986 }, 00:19:05.986 { 00:19:05.986 "name": "BaseBdev2", 00:19:05.986 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:05.986 "is_configured": true, 00:19:05.986 "data_offset": 2048, 00:19:05.986 "data_size": 63488 00:19:05.986 }, 00:19:05.986 { 00:19:05.986 "name": "BaseBdev3", 00:19:05.986 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:05.986 "is_configured": true, 00:19:05.986 "data_offset": 2048, 00:19:05.986 "data_size": 63488 00:19:05.986 }, 00:19:05.986 { 00:19:05.986 "name": "BaseBdev4", 00:19:05.986 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:05.986 "is_configured": true, 00:19:05.986 
"data_offset": 2048, 00:19:05.986 "data_size": 63488 00:19:05.986 } 00:19:05.986 ] 00:19:05.986 }' 00:19:05.986 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:05.987 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.554 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:06.554 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:06.554 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.554 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.554 [2024-11-27 14:18:43.748181] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:06.554 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.554 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=190464 00:19:06.554 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:06.554 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:06.554 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.554 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:06.554 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@619 -- # data_offset=2048 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:06.813 14:18:43 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:07.073 [2024-11-27 14:18:44.152056] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:19:07.073 /dev/nbd0 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:07.073 
14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:07.073 1+0 records in 00:19:07.073 1+0 records out 00:19:07.073 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000272544 s, 15.0 MB/s 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@629 -- # '[' raid5f = raid5f ']' 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@630 -- # write_unit_size=384 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@631 -- # echo 192 00:19:07.073 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=196608 count=496 
oflag=direct 00:19:07.641 496+0 records in 00:19:07.641 496+0 records out 00:19:07.641 97517568 bytes (98 MB, 93 MiB) copied, 0.615928 s, 158 MB/s 00:19:07.641 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:07.641 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:07.641 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:07.641 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:07.641 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:07.641 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:07.641 14:18:44 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:07.900 [2024-11-27 14:18:45.133998] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@640 -- # 
rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:07.900 [2024-11-27 14:18:45.153739] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.900 14:18:45 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@10 -- # set +x 00:19:08.160 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.160 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:08.160 "name": "raid_bdev1", 00:19:08.160 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:08.160 "strip_size_kb": 64, 00:19:08.160 "state": "online", 00:19:08.160 "raid_level": "raid5f", 00:19:08.160 "superblock": true, 00:19:08.160 "num_base_bdevs": 4, 00:19:08.160 "num_base_bdevs_discovered": 3, 00:19:08.160 "num_base_bdevs_operational": 3, 00:19:08.161 "base_bdevs_list": [ 00:19:08.161 { 00:19:08.161 "name": null, 00:19:08.161 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:08.161 "is_configured": false, 00:19:08.161 "data_offset": 0, 00:19:08.161 "data_size": 63488 00:19:08.161 }, 00:19:08.161 { 00:19:08.161 "name": "BaseBdev2", 00:19:08.161 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:08.161 "is_configured": true, 00:19:08.161 "data_offset": 2048, 00:19:08.161 "data_size": 63488 00:19:08.161 }, 00:19:08.161 { 00:19:08.161 "name": "BaseBdev3", 00:19:08.161 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:08.161 "is_configured": true, 00:19:08.161 "data_offset": 2048, 00:19:08.161 "data_size": 63488 00:19:08.161 }, 00:19:08.161 { 00:19:08.161 "name": "BaseBdev4", 00:19:08.161 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:08.161 "is_configured": true, 00:19:08.161 "data_offset": 2048, 00:19:08.161 "data_size": 63488 00:19:08.161 } 00:19:08.161 ] 00:19:08.161 }' 00:19:08.161 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:08.161 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.420 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:08.420 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 
-- # xtrace_disable 00:19:08.420 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:08.420 [2024-11-27 14:18:45.681915] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:08.679 [2024-11-27 14:18:45.696990] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002aa50 00:19:08.679 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.679 14:18:45 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:08.679 [2024-11-27 14:18:45.706369] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:09.615 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:09.615 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:09.615 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:09.615 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:09.615 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:09.615 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.615 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.615 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.615 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.615 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.615 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:09.615 "name": "raid_bdev1", 00:19:09.615 "uuid": 
"bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:09.615 "strip_size_kb": 64, 00:19:09.615 "state": "online", 00:19:09.615 "raid_level": "raid5f", 00:19:09.615 "superblock": true, 00:19:09.615 "num_base_bdevs": 4, 00:19:09.615 "num_base_bdevs_discovered": 4, 00:19:09.615 "num_base_bdevs_operational": 4, 00:19:09.615 "process": { 00:19:09.615 "type": "rebuild", 00:19:09.615 "target": "spare", 00:19:09.615 "progress": { 00:19:09.615 "blocks": 17280, 00:19:09.615 "percent": 9 00:19:09.615 } 00:19:09.615 }, 00:19:09.615 "base_bdevs_list": [ 00:19:09.615 { 00:19:09.615 "name": "spare", 00:19:09.615 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:09.615 "is_configured": true, 00:19:09.615 "data_offset": 2048, 00:19:09.615 "data_size": 63488 00:19:09.615 }, 00:19:09.615 { 00:19:09.616 "name": "BaseBdev2", 00:19:09.616 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:09.616 "is_configured": true, 00:19:09.616 "data_offset": 2048, 00:19:09.616 "data_size": 63488 00:19:09.616 }, 00:19:09.616 { 00:19:09.616 "name": "BaseBdev3", 00:19:09.616 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:09.616 "is_configured": true, 00:19:09.616 "data_offset": 2048, 00:19:09.616 "data_size": 63488 00:19:09.616 }, 00:19:09.616 { 00:19:09.616 "name": "BaseBdev4", 00:19:09.616 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:09.616 "is_configured": true, 00:19:09.616 "data_offset": 2048, 00:19:09.616 "data_size": 63488 00:19:09.616 } 00:19:09.616 ] 00:19:09.616 }' 00:19:09.616 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:09.616 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:09.616 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:09.616 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:09.616 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:09.616 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.616 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.616 [2024-11-27 14:18:46.875679] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.875 [2024-11-27 14:18:46.919890] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:09.875 [2024-11-27 14:18:46.920047] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:09.875 [2024-11-27 14:18:46.920073] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:09.875 [2024-11-27 14:18:46.920087] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:09.875 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.875 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:09.875 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:09.875 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:09.875 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:09.876 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:09.876 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:09.876 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:09.876 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:09.876 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:09.876 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:09.876 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:09.876 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:09.876 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.876 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:09.876 14:18:46 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.876 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:09.876 "name": "raid_bdev1", 00:19:09.876 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:09.876 "strip_size_kb": 64, 00:19:09.876 "state": "online", 00:19:09.876 "raid_level": "raid5f", 00:19:09.876 "superblock": true, 00:19:09.876 "num_base_bdevs": 4, 00:19:09.876 "num_base_bdevs_discovered": 3, 00:19:09.876 "num_base_bdevs_operational": 3, 00:19:09.876 "base_bdevs_list": [ 00:19:09.876 { 00:19:09.876 "name": null, 00:19:09.876 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:09.876 "is_configured": false, 00:19:09.876 "data_offset": 0, 00:19:09.876 "data_size": 63488 00:19:09.876 }, 00:19:09.876 { 00:19:09.876 "name": "BaseBdev2", 00:19:09.876 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:09.876 "is_configured": true, 00:19:09.876 "data_offset": 2048, 00:19:09.876 "data_size": 63488 00:19:09.876 }, 00:19:09.876 { 00:19:09.876 "name": "BaseBdev3", 00:19:09.876 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:09.876 "is_configured": true, 00:19:09.876 "data_offset": 2048, 00:19:09.876 "data_size": 63488 00:19:09.876 }, 00:19:09.876 { 00:19:09.876 "name": "BaseBdev4", 00:19:09.876 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 
00:19:09.876 "is_configured": true, 00:19:09.876 "data_offset": 2048, 00:19:09.876 "data_size": 63488 00:19:09.876 } 00:19:09.876 ] 00:19:09.876 }' 00:19:09.876 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:09.876 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:10.444 "name": "raid_bdev1", 00:19:10.444 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:10.444 "strip_size_kb": 64, 00:19:10.444 "state": "online", 00:19:10.444 "raid_level": "raid5f", 00:19:10.444 "superblock": true, 00:19:10.444 "num_base_bdevs": 4, 00:19:10.444 "num_base_bdevs_discovered": 3, 00:19:10.444 "num_base_bdevs_operational": 3, 00:19:10.444 "base_bdevs_list": [ 00:19:10.444 { 00:19:10.444 
"name": null, 00:19:10.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:10.444 "is_configured": false, 00:19:10.444 "data_offset": 0, 00:19:10.444 "data_size": 63488 00:19:10.444 }, 00:19:10.444 { 00:19:10.444 "name": "BaseBdev2", 00:19:10.444 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:10.444 "is_configured": true, 00:19:10.444 "data_offset": 2048, 00:19:10.444 "data_size": 63488 00:19:10.444 }, 00:19:10.444 { 00:19:10.444 "name": "BaseBdev3", 00:19:10.444 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:10.444 "is_configured": true, 00:19:10.444 "data_offset": 2048, 00:19:10.444 "data_size": 63488 00:19:10.444 }, 00:19:10.444 { 00:19:10.444 "name": "BaseBdev4", 00:19:10.444 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:10.444 "is_configured": true, 00:19:10.444 "data_offset": 2048, 00:19:10.444 "data_size": 63488 00:19:10.444 } 00:19:10.444 ] 00:19:10.444 }' 00:19:10.444 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:10.445 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:10.445 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:10.445 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:10.445 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:10.445 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.445 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:10.445 [2024-11-27 14:18:47.676740] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:10.445 [2024-11-27 14:18:47.689841] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00002ab20 00:19:10.445 14:18:47 bdev_raid.raid5f_rebuild_test_sb 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.445 14:18:47 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:10.445 [2024-11-27 14:18:47.698344] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.821 "name": "raid_bdev1", 00:19:11.821 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:11.821 "strip_size_kb": 64, 00:19:11.821 "state": "online", 00:19:11.821 "raid_level": "raid5f", 00:19:11.821 "superblock": true, 00:19:11.821 "num_base_bdevs": 4, 00:19:11.821 "num_base_bdevs_discovered": 4, 00:19:11.821 "num_base_bdevs_operational": 4, 00:19:11.821 "process": { 00:19:11.821 "type": "rebuild", 00:19:11.821 "target": "spare", 00:19:11.821 "progress": { 
00:19:11.821 "blocks": 17280, 00:19:11.821 "percent": 9 00:19:11.821 } 00:19:11.821 }, 00:19:11.821 "base_bdevs_list": [ 00:19:11.821 { 00:19:11.821 "name": "spare", 00:19:11.821 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:11.821 "is_configured": true, 00:19:11.821 "data_offset": 2048, 00:19:11.821 "data_size": 63488 00:19:11.821 }, 00:19:11.821 { 00:19:11.821 "name": "BaseBdev2", 00:19:11.821 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:11.821 "is_configured": true, 00:19:11.821 "data_offset": 2048, 00:19:11.821 "data_size": 63488 00:19:11.821 }, 00:19:11.821 { 00:19:11.821 "name": "BaseBdev3", 00:19:11.821 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:11.821 "is_configured": true, 00:19:11.821 "data_offset": 2048, 00:19:11.821 "data_size": 63488 00:19:11.821 }, 00:19:11.821 { 00:19:11.821 "name": "BaseBdev4", 00:19:11.821 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:11.821 "is_configured": true, 00:19:11.821 "data_offset": 2048, 00:19:11.821 "data_size": 63488 00:19:11.821 } 00:19:11.821 ] 00:19:11.821 }' 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:11.821 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=4 00:19:11.821 14:18:48 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@693 -- # '[' raid5f = raid1 ']' 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@706 -- # local timeout=695 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.821 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:11.821 "name": "raid_bdev1", 00:19:11.821 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:11.821 "strip_size_kb": 64, 00:19:11.821 "state": "online", 00:19:11.821 "raid_level": "raid5f", 00:19:11.821 "superblock": true, 00:19:11.821 "num_base_bdevs": 4, 00:19:11.821 "num_base_bdevs_discovered": 4, 00:19:11.821 "num_base_bdevs_operational": 4, 00:19:11.821 "process": { 00:19:11.821 "type": "rebuild", 00:19:11.821 "target": "spare", 00:19:11.821 
"progress": { 00:19:11.821 "blocks": 21120, 00:19:11.821 "percent": 11 00:19:11.821 } 00:19:11.821 }, 00:19:11.821 "base_bdevs_list": [ 00:19:11.821 { 00:19:11.821 "name": "spare", 00:19:11.821 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:11.821 "is_configured": true, 00:19:11.821 "data_offset": 2048, 00:19:11.821 "data_size": 63488 00:19:11.821 }, 00:19:11.821 { 00:19:11.821 "name": "BaseBdev2", 00:19:11.821 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:11.821 "is_configured": true, 00:19:11.822 "data_offset": 2048, 00:19:11.822 "data_size": 63488 00:19:11.822 }, 00:19:11.822 { 00:19:11.822 "name": "BaseBdev3", 00:19:11.822 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:11.822 "is_configured": true, 00:19:11.822 "data_offset": 2048, 00:19:11.822 "data_size": 63488 00:19:11.822 }, 00:19:11.822 { 00:19:11.822 "name": "BaseBdev4", 00:19:11.822 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:11.822 "is_configured": true, 00:19:11.822 "data_offset": 2048, 00:19:11.822 "data_size": 63488 00:19:11.822 } 00:19:11.822 ] 00:19:11.822 }' 00:19:11.822 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:11.822 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:11.822 14:18:48 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:11.822 14:18:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:11.822 14:18:49 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:12.756 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:12.756 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:12.757 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 
00:19:12.757 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:12.757 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:12.757 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.015 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.015 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:13.015 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.015 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:13.015 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.015 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:13.015 "name": "raid_bdev1", 00:19:13.015 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:13.015 "strip_size_kb": 64, 00:19:13.015 "state": "online", 00:19:13.015 "raid_level": "raid5f", 00:19:13.015 "superblock": true, 00:19:13.015 "num_base_bdevs": 4, 00:19:13.015 "num_base_bdevs_discovered": 4, 00:19:13.015 "num_base_bdevs_operational": 4, 00:19:13.015 "process": { 00:19:13.015 "type": "rebuild", 00:19:13.015 "target": "spare", 00:19:13.015 "progress": { 00:19:13.015 "blocks": 44160, 00:19:13.015 "percent": 23 00:19:13.015 } 00:19:13.015 }, 00:19:13.015 "base_bdevs_list": [ 00:19:13.015 { 00:19:13.015 "name": "spare", 00:19:13.015 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:13.015 "is_configured": true, 00:19:13.015 "data_offset": 2048, 00:19:13.015 "data_size": 63488 00:19:13.015 }, 00:19:13.015 { 00:19:13.015 "name": "BaseBdev2", 00:19:13.015 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:13.015 "is_configured": true, 00:19:13.015 "data_offset": 2048, 00:19:13.015 
"data_size": 63488 00:19:13.015 }, 00:19:13.015 { 00:19:13.015 "name": "BaseBdev3", 00:19:13.015 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:13.015 "is_configured": true, 00:19:13.015 "data_offset": 2048, 00:19:13.015 "data_size": 63488 00:19:13.015 }, 00:19:13.015 { 00:19:13.015 "name": "BaseBdev4", 00:19:13.015 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:13.015 "is_configured": true, 00:19:13.015 "data_offset": 2048, 00:19:13.015 "data_size": 63488 00:19:13.015 } 00:19:13.015 ] 00:19:13.015 }' 00:19:13.015 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:13.015 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:13.015 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:13.015 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:13.015 14:18:50 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:13.952 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:13.952 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:13.952 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:13.952 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:13.952 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:13.952 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:13.952 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:13.952 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:19:13.952 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.952 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:14.211 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.211 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:14.211 "name": "raid_bdev1", 00:19:14.211 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:14.211 "strip_size_kb": 64, 00:19:14.211 "state": "online", 00:19:14.211 "raid_level": "raid5f", 00:19:14.211 "superblock": true, 00:19:14.211 "num_base_bdevs": 4, 00:19:14.211 "num_base_bdevs_discovered": 4, 00:19:14.211 "num_base_bdevs_operational": 4, 00:19:14.211 "process": { 00:19:14.211 "type": "rebuild", 00:19:14.211 "target": "spare", 00:19:14.211 "progress": { 00:19:14.211 "blocks": 65280, 00:19:14.211 "percent": 34 00:19:14.211 } 00:19:14.211 }, 00:19:14.211 "base_bdevs_list": [ 00:19:14.211 { 00:19:14.211 "name": "spare", 00:19:14.211 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:14.211 "is_configured": true, 00:19:14.211 "data_offset": 2048, 00:19:14.211 "data_size": 63488 00:19:14.211 }, 00:19:14.211 { 00:19:14.211 "name": "BaseBdev2", 00:19:14.211 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:14.211 "is_configured": true, 00:19:14.211 "data_offset": 2048, 00:19:14.211 "data_size": 63488 00:19:14.211 }, 00:19:14.211 { 00:19:14.211 "name": "BaseBdev3", 00:19:14.211 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:14.211 "is_configured": true, 00:19:14.211 "data_offset": 2048, 00:19:14.211 "data_size": 63488 00:19:14.211 }, 00:19:14.211 { 00:19:14.211 "name": "BaseBdev4", 00:19:14.211 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:14.211 "is_configured": true, 00:19:14.211 "data_offset": 2048, 00:19:14.211 "data_size": 63488 00:19:14.211 } 00:19:14.211 ] 00:19:14.211 }' 00:19:14.211 14:18:51 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:14.211 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:14.211 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:14.211 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:14.211 14:18:51 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:15.148 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:15.148 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:15.148 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:15.148 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:15.148 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:15.148 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:15.148 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:15.148 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.148 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:15.148 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:15.148 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.408 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:15.408 "name": "raid_bdev1", 00:19:15.408 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:15.408 
"strip_size_kb": 64, 00:19:15.408 "state": "online", 00:19:15.408 "raid_level": "raid5f", 00:19:15.408 "superblock": true, 00:19:15.408 "num_base_bdevs": 4, 00:19:15.408 "num_base_bdevs_discovered": 4, 00:19:15.408 "num_base_bdevs_operational": 4, 00:19:15.408 "process": { 00:19:15.408 "type": "rebuild", 00:19:15.408 "target": "spare", 00:19:15.408 "progress": { 00:19:15.408 "blocks": 88320, 00:19:15.408 "percent": 46 00:19:15.408 } 00:19:15.408 }, 00:19:15.408 "base_bdevs_list": [ 00:19:15.408 { 00:19:15.408 "name": "spare", 00:19:15.408 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:15.408 "is_configured": true, 00:19:15.408 "data_offset": 2048, 00:19:15.408 "data_size": 63488 00:19:15.408 }, 00:19:15.408 { 00:19:15.408 "name": "BaseBdev2", 00:19:15.408 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:15.408 "is_configured": true, 00:19:15.408 "data_offset": 2048, 00:19:15.408 "data_size": 63488 00:19:15.408 }, 00:19:15.408 { 00:19:15.408 "name": "BaseBdev3", 00:19:15.408 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:15.408 "is_configured": true, 00:19:15.408 "data_offset": 2048, 00:19:15.408 "data_size": 63488 00:19:15.408 }, 00:19:15.408 { 00:19:15.408 "name": "BaseBdev4", 00:19:15.408 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:15.408 "is_configured": true, 00:19:15.408 "data_offset": 2048, 00:19:15.408 "data_size": 63488 00:19:15.408 } 00:19:15.408 ] 00:19:15.408 }' 00:19:15.408 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:15.408 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:15.408 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:15.408 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:15.408 14:18:52 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:16.346 
14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:16.346 "name": "raid_bdev1", 00:19:16.346 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:16.346 "strip_size_kb": 64, 00:19:16.346 "state": "online", 00:19:16.346 "raid_level": "raid5f", 00:19:16.346 "superblock": true, 00:19:16.346 "num_base_bdevs": 4, 00:19:16.346 "num_base_bdevs_discovered": 4, 00:19:16.346 "num_base_bdevs_operational": 4, 00:19:16.346 "process": { 00:19:16.346 "type": "rebuild", 00:19:16.346 "target": "spare", 00:19:16.346 "progress": { 00:19:16.346 "blocks": 111360, 00:19:16.346 "percent": 58 00:19:16.346 } 00:19:16.346 }, 00:19:16.346 "base_bdevs_list": [ 00:19:16.346 { 00:19:16.346 "name": "spare", 00:19:16.346 "uuid": 
"586f0115-5976-5082-9613-e10c09def55a", 00:19:16.346 "is_configured": true, 00:19:16.346 "data_offset": 2048, 00:19:16.346 "data_size": 63488 00:19:16.346 }, 00:19:16.346 { 00:19:16.346 "name": "BaseBdev2", 00:19:16.346 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:16.346 "is_configured": true, 00:19:16.346 "data_offset": 2048, 00:19:16.346 "data_size": 63488 00:19:16.346 }, 00:19:16.346 { 00:19:16.346 "name": "BaseBdev3", 00:19:16.346 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:16.346 "is_configured": true, 00:19:16.346 "data_offset": 2048, 00:19:16.346 "data_size": 63488 00:19:16.346 }, 00:19:16.346 { 00:19:16.346 "name": "BaseBdev4", 00:19:16.346 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:16.346 "is_configured": true, 00:19:16.346 "data_offset": 2048, 00:19:16.346 "data_size": 63488 00:19:16.346 } 00:19:16.346 ] 00:19:16.346 }' 00:19:16.346 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:16.605 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:16.605 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:16.605 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:16.605 14:18:53 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local 
target=spare 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.542 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:17.542 "name": "raid_bdev1", 00:19:17.542 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:17.542 "strip_size_kb": 64, 00:19:17.542 "state": "online", 00:19:17.542 "raid_level": "raid5f", 00:19:17.542 "superblock": true, 00:19:17.542 "num_base_bdevs": 4, 00:19:17.542 "num_base_bdevs_discovered": 4, 00:19:17.542 "num_base_bdevs_operational": 4, 00:19:17.542 "process": { 00:19:17.542 "type": "rebuild", 00:19:17.542 "target": "spare", 00:19:17.543 "progress": { 00:19:17.543 "blocks": 132480, 00:19:17.543 "percent": 69 00:19:17.543 } 00:19:17.543 }, 00:19:17.543 "base_bdevs_list": [ 00:19:17.543 { 00:19:17.543 "name": "spare", 00:19:17.543 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:17.543 "is_configured": true, 00:19:17.543 "data_offset": 2048, 00:19:17.543 "data_size": 63488 00:19:17.543 }, 00:19:17.543 { 00:19:17.543 "name": "BaseBdev2", 00:19:17.543 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:17.543 "is_configured": true, 00:19:17.543 "data_offset": 2048, 00:19:17.543 "data_size": 63488 00:19:17.543 }, 00:19:17.543 { 00:19:17.543 "name": "BaseBdev3", 00:19:17.543 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:17.543 "is_configured": true, 00:19:17.543 
"data_offset": 2048, 00:19:17.543 "data_size": 63488 00:19:17.543 }, 00:19:17.543 { 00:19:17.543 "name": "BaseBdev4", 00:19:17.543 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:17.543 "is_configured": true, 00:19:17.543 "data_offset": 2048, 00:19:17.543 "data_size": 63488 00:19:17.543 } 00:19:17.543 ] 00:19:17.543 }' 00:19:17.543 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:17.802 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:17.802 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:17.802 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:17.802 14:18:54 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@10 -- # set +x 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:18.743 "name": "raid_bdev1", 00:19:18.743 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:18.743 "strip_size_kb": 64, 00:19:18.743 "state": "online", 00:19:18.743 "raid_level": "raid5f", 00:19:18.743 "superblock": true, 00:19:18.743 "num_base_bdevs": 4, 00:19:18.743 "num_base_bdevs_discovered": 4, 00:19:18.743 "num_base_bdevs_operational": 4, 00:19:18.743 "process": { 00:19:18.743 "type": "rebuild", 00:19:18.743 "target": "spare", 00:19:18.743 "progress": { 00:19:18.743 "blocks": 155520, 00:19:18.743 "percent": 81 00:19:18.743 } 00:19:18.743 }, 00:19:18.743 "base_bdevs_list": [ 00:19:18.743 { 00:19:18.743 "name": "spare", 00:19:18.743 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:18.743 "is_configured": true, 00:19:18.743 "data_offset": 2048, 00:19:18.743 "data_size": 63488 00:19:18.743 }, 00:19:18.743 { 00:19:18.743 "name": "BaseBdev2", 00:19:18.743 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:18.743 "is_configured": true, 00:19:18.743 "data_offset": 2048, 00:19:18.743 "data_size": 63488 00:19:18.743 }, 00:19:18.743 { 00:19:18.743 "name": "BaseBdev3", 00:19:18.743 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:18.743 "is_configured": true, 00:19:18.743 "data_offset": 2048, 00:19:18.743 "data_size": 63488 00:19:18.743 }, 00:19:18.743 { 00:19:18.743 "name": "BaseBdev4", 00:19:18.743 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:18.743 "is_configured": true, 00:19:18.743 "data_offset": 2048, 00:19:18.743 "data_size": 63488 00:19:18.743 } 00:19:18.743 ] 00:19:18.743 }' 00:19:18.743 14:18:55 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.002 14:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild 
== \r\e\b\u\i\l\d ]] 00:19:19.002 14:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:19.002 14:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:19.002 14:18:56 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.939 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:19.939 "name": "raid_bdev1", 00:19:19.939 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:19.939 "strip_size_kb": 64, 00:19:19.939 "state": "online", 00:19:19.939 "raid_level": "raid5f", 00:19:19.939 "superblock": true, 00:19:19.939 "num_base_bdevs": 4, 00:19:19.939 "num_base_bdevs_discovered": 4, 
00:19:19.939 "num_base_bdevs_operational": 4, 00:19:19.939 "process": { 00:19:19.939 "type": "rebuild", 00:19:19.939 "target": "spare", 00:19:19.939 "progress": { 00:19:19.939 "blocks": 176640, 00:19:19.939 "percent": 92 00:19:19.939 } 00:19:19.939 }, 00:19:19.939 "base_bdevs_list": [ 00:19:19.939 { 00:19:19.939 "name": "spare", 00:19:19.940 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:19.940 "is_configured": true, 00:19:19.940 "data_offset": 2048, 00:19:19.940 "data_size": 63488 00:19:19.940 }, 00:19:19.940 { 00:19:19.940 "name": "BaseBdev2", 00:19:19.940 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:19.940 "is_configured": true, 00:19:19.940 "data_offset": 2048, 00:19:19.940 "data_size": 63488 00:19:19.940 }, 00:19:19.940 { 00:19:19.940 "name": "BaseBdev3", 00:19:19.940 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:19.940 "is_configured": true, 00:19:19.940 "data_offset": 2048, 00:19:19.940 "data_size": 63488 00:19:19.940 }, 00:19:19.940 { 00:19:19.940 "name": "BaseBdev4", 00:19:19.940 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:19.940 "is_configured": true, 00:19:19.940 "data_offset": 2048, 00:19:19.940 "data_size": 63488 00:19:19.940 } 00:19:19.940 ] 00:19:19.940 }' 00:19:19.940 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:19.940 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:19.940 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:20.198 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:20.198 14:18:57 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:20.766 [2024-11-27 14:18:57.808395] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:20.766 [2024-11-27 14:18:57.808496] 
bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:20.766 [2024-11-27 14:18:57.808715] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:21.025 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:21.025 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:21.025 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.025 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:21.025 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:21.025 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:21.025 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.025 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.025 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.025 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.025 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.284 "name": "raid_bdev1", 00:19:21.284 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:21.284 "strip_size_kb": 64, 00:19:21.284 "state": "online", 00:19:21.284 "raid_level": "raid5f", 00:19:21.284 "superblock": true, 00:19:21.284 "num_base_bdevs": 4, 00:19:21.284 "num_base_bdevs_discovered": 4, 00:19:21.284 "num_base_bdevs_operational": 4, 00:19:21.284 "base_bdevs_list": [ 00:19:21.284 { 00:19:21.284 "name": "spare", 
00:19:21.284 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:21.284 "is_configured": true, 00:19:21.284 "data_offset": 2048, 00:19:21.284 "data_size": 63488 00:19:21.284 }, 00:19:21.284 { 00:19:21.284 "name": "BaseBdev2", 00:19:21.284 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:21.284 "is_configured": true, 00:19:21.284 "data_offset": 2048, 00:19:21.284 "data_size": 63488 00:19:21.284 }, 00:19:21.284 { 00:19:21.284 "name": "BaseBdev3", 00:19:21.284 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:21.284 "is_configured": true, 00:19:21.284 "data_offset": 2048, 00:19:21.284 "data_size": 63488 00:19:21.284 }, 00:19:21.284 { 00:19:21.284 "name": "BaseBdev4", 00:19:21.284 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:21.284 "is_configured": true, 00:19:21.284 "data_offset": 2048, 00:19:21.284 "data_size": 63488 00:19:21.284 } 00:19:21.284 ] 00:19:21.284 }' 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@709 -- # break 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:21.284 "name": "raid_bdev1", 00:19:21.284 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:21.284 "strip_size_kb": 64, 00:19:21.284 "state": "online", 00:19:21.284 "raid_level": "raid5f", 00:19:21.284 "superblock": true, 00:19:21.284 "num_base_bdevs": 4, 00:19:21.284 "num_base_bdevs_discovered": 4, 00:19:21.284 "num_base_bdevs_operational": 4, 00:19:21.284 "base_bdevs_list": [ 00:19:21.284 { 00:19:21.284 "name": "spare", 00:19:21.284 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:21.284 "is_configured": true, 00:19:21.284 "data_offset": 2048, 00:19:21.284 "data_size": 63488 00:19:21.284 }, 00:19:21.284 { 00:19:21.284 "name": "BaseBdev2", 00:19:21.284 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:21.284 "is_configured": true, 00:19:21.284 "data_offset": 2048, 00:19:21.284 "data_size": 63488 00:19:21.284 }, 00:19:21.284 { 00:19:21.284 "name": "BaseBdev3", 00:19:21.284 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:21.284 "is_configured": true, 00:19:21.284 "data_offset": 2048, 00:19:21.284 "data_size": 63488 00:19:21.284 }, 00:19:21.284 { 00:19:21.284 "name": "BaseBdev4", 00:19:21.284 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:21.284 "is_configured": true, 00:19:21.284 "data_offset": 2048, 00:19:21.284 "data_size": 63488 00:19:21.284 } 00:19:21.284 ] 
00:19:21.284 }' 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:21.284 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 
-- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.543 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:21.543 "name": "raid_bdev1", 00:19:21.543 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:21.543 "strip_size_kb": 64, 00:19:21.543 "state": "online", 00:19:21.543 "raid_level": "raid5f", 00:19:21.543 "superblock": true, 00:19:21.543 "num_base_bdevs": 4, 00:19:21.543 "num_base_bdevs_discovered": 4, 00:19:21.543 "num_base_bdevs_operational": 4, 00:19:21.543 "base_bdevs_list": [ 00:19:21.543 { 00:19:21.543 "name": "spare", 00:19:21.543 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:21.543 "is_configured": true, 00:19:21.543 "data_offset": 2048, 00:19:21.543 "data_size": 63488 00:19:21.543 }, 00:19:21.543 { 00:19:21.543 "name": "BaseBdev2", 00:19:21.543 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:21.543 "is_configured": true, 00:19:21.543 "data_offset": 2048, 00:19:21.543 "data_size": 63488 00:19:21.543 }, 00:19:21.543 { 00:19:21.543 "name": "BaseBdev3", 00:19:21.543 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:21.543 "is_configured": true, 00:19:21.543 "data_offset": 2048, 00:19:21.544 "data_size": 63488 00:19:21.544 }, 00:19:21.544 { 00:19:21.544 "name": "BaseBdev4", 00:19:21.544 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:21.544 "is_configured": true, 00:19:21.544 "data_offset": 2048, 00:19:21.544 "data_size": 63488 00:19:21.544 } 00:19:21.544 ] 00:19:21.544 }' 00:19:21.544 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:21.544 14:18:58 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.114 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.115 [2024-11-27 14:18:59.138729] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:22.115 [2024-11-27 14:18:59.138808] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:22.115 [2024-11-27 14:18:59.138907] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:22.115 [2024-11-27 14:18:59.139038] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:22.115 [2024-11-27 14:18:59.139068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # jq length 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@9 -- # local 
rpc_server=/var/tmp/spdk.sock 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@12 -- # local i 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:22.115 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:22.379 /dev/nbd0 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:22.379 1+0 records in 00:19:22.379 1+0 records out 00:19:22.379 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00035697 s, 11.5 MB/s 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:22.379 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:22.380 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:22.380 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:22.638 /dev/nbd1 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@873 -- # local i 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@875 -- # (( 
i <= 20 )) 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@877 -- # break 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:22.638 1+0 records in 00:19:22.638 1+0 records out 00:19:22.638 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000360517 s, 11.4 MB/s 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@890 -- # size=4096 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@893 -- # return 0 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:22.638 14:18:59 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:22.897 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:22.897 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:22.897 14:19:00 
bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:22.897 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:22.897 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@51 -- # local i 00:19:22.897 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:22.897 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:23.156 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:23.156 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:23.156 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:23.156 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:23.156 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:23.156 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:23.156 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:23.156 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:23.156 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:23.156 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 
00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@41 -- # break 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/nbd_common.sh@45 -- # return 0 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.416 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.416 [2024-11-27 14:19:00.642481] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:23.416 [2024-11-27 14:19:00.642553] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:23.416 [2024-11-27 14:19:00.642586] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ba80 00:19:23.416 [2024-11-27 14:19:00.642600] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:23.417 [2024-11-27 14:19:00.645731] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:23.417 
[2024-11-27 14:19:00.645821] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:23.417 [2024-11-27 14:19:00.645944] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:23.417 [2024-11-27 14:19:00.646013] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:23.417 [2024-11-27 14:19:00.646183] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:23.417 [2024-11-27 14:19:00.646313] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:19:23.417 [2024-11-27 14:19:00.646436] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:19:23.417 spare 00:19:23.417 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.417 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:23.417 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.417 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.676 [2024-11-27 14:19:00.746617] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:23.676 [2024-11-27 14:19:00.746686] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:19:23.676 [2024-11-27 14:19:00.747207] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000491d0 00:19:23.676 [2024-11-27 14:19:00.753208] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:23.676 [2024-11-27 14:19:00.753232] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:23.676 [2024-11-27 14:19:00.753476] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:23.676 14:19:00 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 4 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=4 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:23.676 "name": "raid_bdev1", 00:19:23.676 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:23.676 "strip_size_kb": 
64, 00:19:23.676 "state": "online", 00:19:23.676 "raid_level": "raid5f", 00:19:23.676 "superblock": true, 00:19:23.676 "num_base_bdevs": 4, 00:19:23.676 "num_base_bdevs_discovered": 4, 00:19:23.676 "num_base_bdevs_operational": 4, 00:19:23.676 "base_bdevs_list": [ 00:19:23.676 { 00:19:23.676 "name": "spare", 00:19:23.676 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:23.676 "is_configured": true, 00:19:23.676 "data_offset": 2048, 00:19:23.676 "data_size": 63488 00:19:23.676 }, 00:19:23.676 { 00:19:23.676 "name": "BaseBdev2", 00:19:23.676 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:23.676 "is_configured": true, 00:19:23.676 "data_offset": 2048, 00:19:23.676 "data_size": 63488 00:19:23.676 }, 00:19:23.676 { 00:19:23.676 "name": "BaseBdev3", 00:19:23.676 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:23.676 "is_configured": true, 00:19:23.676 "data_offset": 2048, 00:19:23.676 "data_size": 63488 00:19:23.676 }, 00:19:23.676 { 00:19:23.676 "name": "BaseBdev4", 00:19:23.676 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:23.676 "is_configured": true, 00:19:23.676 "data_offset": 2048, 00:19:23.676 "data_size": 63488 00:19:23.676 } 00:19:23.676 ] 00:19:23.676 }' 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:23.676 14:19:00 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:24.244 14:19:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:24.244 "name": "raid_bdev1", 00:19:24.244 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:24.244 "strip_size_kb": 64, 00:19:24.244 "state": "online", 00:19:24.244 "raid_level": "raid5f", 00:19:24.244 "superblock": true, 00:19:24.244 "num_base_bdevs": 4, 00:19:24.244 "num_base_bdevs_discovered": 4, 00:19:24.244 "num_base_bdevs_operational": 4, 00:19:24.244 "base_bdevs_list": [ 00:19:24.244 { 00:19:24.244 "name": "spare", 00:19:24.244 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:24.244 "is_configured": true, 00:19:24.244 "data_offset": 2048, 00:19:24.244 "data_size": 63488 00:19:24.244 }, 00:19:24.244 { 00:19:24.244 "name": "BaseBdev2", 00:19:24.244 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:24.244 "is_configured": true, 00:19:24.244 "data_offset": 2048, 00:19:24.244 "data_size": 63488 00:19:24.244 }, 00:19:24.244 { 00:19:24.244 "name": "BaseBdev3", 00:19:24.244 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:24.244 "is_configured": true, 00:19:24.244 "data_offset": 2048, 00:19:24.244 "data_size": 63488 00:19:24.244 }, 00:19:24.244 { 00:19:24.244 "name": "BaseBdev4", 00:19:24.244 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:24.244 "is_configured": true, 00:19:24.244 "data_offset": 2048, 00:19:24.244 "data_size": 63488 00:19:24.244 } 00:19:24.244 ] 00:19:24.244 }' 00:19:24.244 14:19:01 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.244 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.504 [2024-11-27 14:19:01.520895] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:24.504 "name": "raid_bdev1", 00:19:24.504 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:24.504 "strip_size_kb": 64, 00:19:24.504 "state": "online", 00:19:24.504 "raid_level": "raid5f", 00:19:24.504 "superblock": true, 00:19:24.504 "num_base_bdevs": 4, 00:19:24.504 "num_base_bdevs_discovered": 3, 00:19:24.504 "num_base_bdevs_operational": 3, 00:19:24.504 "base_bdevs_list": [ 00:19:24.504 { 00:19:24.504 "name": null, 00:19:24.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:24.504 "is_configured": false, 00:19:24.504 
"data_offset": 0, 00:19:24.504 "data_size": 63488 00:19:24.504 }, 00:19:24.504 { 00:19:24.504 "name": "BaseBdev2", 00:19:24.504 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:24.504 "is_configured": true, 00:19:24.504 "data_offset": 2048, 00:19:24.504 "data_size": 63488 00:19:24.504 }, 00:19:24.504 { 00:19:24.504 "name": "BaseBdev3", 00:19:24.504 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:24.504 "is_configured": true, 00:19:24.504 "data_offset": 2048, 00:19:24.504 "data_size": 63488 00:19:24.504 }, 00:19:24.504 { 00:19:24.504 "name": "BaseBdev4", 00:19:24.504 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:24.504 "is_configured": true, 00:19:24.504 "data_offset": 2048, 00:19:24.504 "data_size": 63488 00:19:24.504 } 00:19:24.504 ] 00:19:24.504 }' 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:24.504 14:19:01 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.072 14:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:25.072 14:19:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.072 14:19:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:25.072 [2024-11-27 14:19:02.065078] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:25.072 [2024-11-27 14:19:02.065334] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:25.072 [2024-11-27 14:19:02.065361] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:25.072 [2024-11-27 14:19:02.065420] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:25.072 [2024-11-27 14:19:02.078577] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0000492a0 00:19:25.072 14:19:02 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.072 14:19:02 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:25.072 [2024-11-27 14:19:02.087358] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:26.010 "name": "raid_bdev1", 00:19:26.010 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:26.010 "strip_size_kb": 64, 00:19:26.010 "state": "online", 00:19:26.010 
"raid_level": "raid5f", 00:19:26.010 "superblock": true, 00:19:26.010 "num_base_bdevs": 4, 00:19:26.010 "num_base_bdevs_discovered": 4, 00:19:26.010 "num_base_bdevs_operational": 4, 00:19:26.010 "process": { 00:19:26.010 "type": "rebuild", 00:19:26.010 "target": "spare", 00:19:26.010 "progress": { 00:19:26.010 "blocks": 17280, 00:19:26.010 "percent": 9 00:19:26.010 } 00:19:26.010 }, 00:19:26.010 "base_bdevs_list": [ 00:19:26.010 { 00:19:26.010 "name": "spare", 00:19:26.010 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:26.010 "is_configured": true, 00:19:26.010 "data_offset": 2048, 00:19:26.010 "data_size": 63488 00:19:26.010 }, 00:19:26.010 { 00:19:26.010 "name": "BaseBdev2", 00:19:26.010 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:26.010 "is_configured": true, 00:19:26.010 "data_offset": 2048, 00:19:26.010 "data_size": 63488 00:19:26.010 }, 00:19:26.010 { 00:19:26.010 "name": "BaseBdev3", 00:19:26.010 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:26.010 "is_configured": true, 00:19:26.010 "data_offset": 2048, 00:19:26.010 "data_size": 63488 00:19:26.010 }, 00:19:26.010 { 00:19:26.010 "name": "BaseBdev4", 00:19:26.010 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:26.010 "is_configured": true, 00:19:26.010 "data_offset": 2048, 00:19:26.010 "data_size": 63488 00:19:26.010 } 00:19:26.010 ] 00:19:26.010 }' 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.010 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.010 [2024-11-27 14:19:03.248743] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:26.269 [2024-11-27 14:19:03.300576] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:26.269 [2024-11-27 14:19:03.300952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:26.269 [2024-11-27 14:19:03.301242] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:26.269 [2024-11-27 14:19:03.301305] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:26.269 "name": "raid_bdev1", 00:19:26.269 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:26.269 "strip_size_kb": 64, 00:19:26.269 "state": "online", 00:19:26.269 "raid_level": "raid5f", 00:19:26.269 "superblock": true, 00:19:26.269 "num_base_bdevs": 4, 00:19:26.269 "num_base_bdevs_discovered": 3, 00:19:26.269 "num_base_bdevs_operational": 3, 00:19:26.269 "base_bdevs_list": [ 00:19:26.269 { 00:19:26.269 "name": null, 00:19:26.269 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:26.269 "is_configured": false, 00:19:26.269 "data_offset": 0, 00:19:26.269 "data_size": 63488 00:19:26.269 }, 00:19:26.269 { 00:19:26.269 "name": "BaseBdev2", 00:19:26.269 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:26.269 "is_configured": true, 00:19:26.269 "data_offset": 2048, 00:19:26.269 "data_size": 63488 00:19:26.269 }, 00:19:26.269 { 00:19:26.269 "name": "BaseBdev3", 00:19:26.269 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:26.269 "is_configured": true, 00:19:26.269 "data_offset": 2048, 00:19:26.269 "data_size": 63488 00:19:26.269 }, 00:19:26.269 { 00:19:26.269 "name": "BaseBdev4", 00:19:26.269 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:26.269 "is_configured": true, 00:19:26.269 "data_offset": 2048, 00:19:26.269 "data_size": 63488 00:19:26.269 } 00:19:26.269 ] 00:19:26.269 }' 
00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:26.269 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.837 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:26.837 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.837 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:26.837 [2024-11-27 14:19:03.857079] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:26.837 [2024-11-27 14:19:03.857162] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:26.837 [2024-11-27 14:19:03.857212] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c380 00:19:26.837 [2024-11-27 14:19:03.857231] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:26.837 [2024-11-27 14:19:03.857893] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:26.837 [2024-11-27 14:19:03.857930] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:26.837 [2024-11-27 14:19:03.858047] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:26.837 [2024-11-27 14:19:03.858070] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:26.837 [2024-11-27 14:19:03.858083] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:19:26.837 [2024-11-27 14:19:03.858119] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:26.837 [2024-11-27 14:19:03.871028] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000049370 00:19:26.837 spare 00:19:26.837 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.837 14:19:03 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@764 -- # sleep 1 00:19:26.837 [2024-11-27 14:19:03.879718] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:27.814 "name": "raid_bdev1", 00:19:27.814 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:27.814 "strip_size_kb": 64, 00:19:27.814 "state": 
"online", 00:19:27.814 "raid_level": "raid5f", 00:19:27.814 "superblock": true, 00:19:27.814 "num_base_bdevs": 4, 00:19:27.814 "num_base_bdevs_discovered": 4, 00:19:27.814 "num_base_bdevs_operational": 4, 00:19:27.814 "process": { 00:19:27.814 "type": "rebuild", 00:19:27.814 "target": "spare", 00:19:27.814 "progress": { 00:19:27.814 "blocks": 17280, 00:19:27.814 "percent": 9 00:19:27.814 } 00:19:27.814 }, 00:19:27.814 "base_bdevs_list": [ 00:19:27.814 { 00:19:27.814 "name": "spare", 00:19:27.814 "uuid": "586f0115-5976-5082-9613-e10c09def55a", 00:19:27.814 "is_configured": true, 00:19:27.814 "data_offset": 2048, 00:19:27.814 "data_size": 63488 00:19:27.814 }, 00:19:27.814 { 00:19:27.814 "name": "BaseBdev2", 00:19:27.814 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:27.814 "is_configured": true, 00:19:27.814 "data_offset": 2048, 00:19:27.814 "data_size": 63488 00:19:27.814 }, 00:19:27.814 { 00:19:27.814 "name": "BaseBdev3", 00:19:27.814 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:27.814 "is_configured": true, 00:19:27.814 "data_offset": 2048, 00:19:27.814 "data_size": 63488 00:19:27.814 }, 00:19:27.814 { 00:19:27.814 "name": "BaseBdev4", 00:19:27.814 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:27.814 "is_configured": true, 00:19:27.814 "data_offset": 2048, 00:19:27.814 "data_size": 63488 00:19:27.814 } 00:19:27.814 ] 00:19:27.814 }' 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:27.814 14:19:04 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:27.814 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:27.814 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:19:27.814 14:19:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.814 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:27.814 [2024-11-27 14:19:05.037442] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:28.073 [2024-11-27 14:19:05.092689] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:28.073 [2024-11-27 14:19:05.092819] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:28.073 [2024-11-27 14:19:05.092850] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:28.073 [2024-11-27 14:19:05.092861] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:28.073 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.073 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:28.073 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:28.073 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:28.073 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:28.073 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:28.074 14:19:05 
bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:28.074 "name": "raid_bdev1", 00:19:28.074 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:28.074 "strip_size_kb": 64, 00:19:28.074 "state": "online", 00:19:28.074 "raid_level": "raid5f", 00:19:28.074 "superblock": true, 00:19:28.074 "num_base_bdevs": 4, 00:19:28.074 "num_base_bdevs_discovered": 3, 00:19:28.074 "num_base_bdevs_operational": 3, 00:19:28.074 "base_bdevs_list": [ 00:19:28.074 { 00:19:28.074 "name": null, 00:19:28.074 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.074 "is_configured": false, 00:19:28.074 "data_offset": 0, 00:19:28.074 "data_size": 63488 00:19:28.074 }, 00:19:28.074 { 00:19:28.074 "name": "BaseBdev2", 00:19:28.074 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:28.074 "is_configured": true, 00:19:28.074 "data_offset": 2048, 00:19:28.074 "data_size": 63488 00:19:28.074 }, 00:19:28.074 { 00:19:28.074 "name": "BaseBdev3", 00:19:28.074 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:28.074 "is_configured": true, 00:19:28.074 "data_offset": 2048, 00:19:28.074 "data_size": 63488 00:19:28.074 }, 00:19:28.074 { 00:19:28.074 "name": "BaseBdev4", 00:19:28.074 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:28.074 "is_configured": true, 00:19:28.074 "data_offset": 2048, 00:19:28.074 
"data_size": 63488 00:19:28.074 } 00:19:28.074 ] 00:19:28.074 }' 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:28.074 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.641 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:28.641 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:28.641 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:28.641 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:28.641 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:28.641 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:28.641 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.641 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:28.642 "name": "raid_bdev1", 00:19:28.642 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:28.642 "strip_size_kb": 64, 00:19:28.642 "state": "online", 00:19:28.642 "raid_level": "raid5f", 00:19:28.642 "superblock": true, 00:19:28.642 "num_base_bdevs": 4, 00:19:28.642 "num_base_bdevs_discovered": 3, 00:19:28.642 "num_base_bdevs_operational": 3, 00:19:28.642 "base_bdevs_list": [ 00:19:28.642 { 00:19:28.642 "name": null, 00:19:28.642 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:28.642 
"is_configured": false, 00:19:28.642 "data_offset": 0, 00:19:28.642 "data_size": 63488 00:19:28.642 }, 00:19:28.642 { 00:19:28.642 "name": "BaseBdev2", 00:19:28.642 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:28.642 "is_configured": true, 00:19:28.642 "data_offset": 2048, 00:19:28.642 "data_size": 63488 00:19:28.642 }, 00:19:28.642 { 00:19:28.642 "name": "BaseBdev3", 00:19:28.642 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:28.642 "is_configured": true, 00:19:28.642 "data_offset": 2048, 00:19:28.642 "data_size": 63488 00:19:28.642 }, 00:19:28.642 { 00:19:28.642 "name": "BaseBdev4", 00:19:28.642 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:28.642 "is_configured": true, 00:19:28.642 "data_offset": 2048, 00:19:28.642 "data_size": 63488 00:19:28.642 } 00:19:28.642 ] 00:19:28.642 }' 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.642 14:19:05 
bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:28.642 [2024-11-27 14:19:05.856011] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:28.642 [2024-11-27 14:19:05.856116] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:28.642 [2024-11-27 14:19:05.856185] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000c980 00:19:28.642 [2024-11-27 14:19:05.856214] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:28.642 [2024-11-27 14:19:05.856851] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:28.642 [2024-11-27 14:19:05.856895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:28.642 [2024-11-27 14:19:05.857010] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:19:28.642 [2024-11-27 14:19:05.857037] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:28.642 [2024-11-27 14:19:05.857055] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:28.642 [2024-11-27 14:19:05.857068] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:19:28.642 BaseBdev1 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.642 14:19:05 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@775 -- # sleep 1 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local 
expected_state=online 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:30.017 "name": "raid_bdev1", 00:19:30.017 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:30.017 "strip_size_kb": 64, 00:19:30.017 "state": "online", 00:19:30.017 "raid_level": "raid5f", 00:19:30.017 "superblock": true, 00:19:30.017 "num_base_bdevs": 4, 00:19:30.017 "num_base_bdevs_discovered": 3, 00:19:30.017 "num_base_bdevs_operational": 3, 00:19:30.017 "base_bdevs_list": [ 00:19:30.017 { 00:19:30.017 "name": null, 00:19:30.017 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.017 "is_configured": false, 00:19:30.017 
"data_offset": 0, 00:19:30.017 "data_size": 63488 00:19:30.017 }, 00:19:30.017 { 00:19:30.017 "name": "BaseBdev2", 00:19:30.017 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:30.017 "is_configured": true, 00:19:30.017 "data_offset": 2048, 00:19:30.017 "data_size": 63488 00:19:30.017 }, 00:19:30.017 { 00:19:30.017 "name": "BaseBdev3", 00:19:30.017 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:30.017 "is_configured": true, 00:19:30.017 "data_offset": 2048, 00:19:30.017 "data_size": 63488 00:19:30.017 }, 00:19:30.017 { 00:19:30.017 "name": "BaseBdev4", 00:19:30.017 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:30.017 "is_configured": true, 00:19:30.017 "data_offset": 2048, 00:19:30.017 "data_size": 63488 00:19:30.017 } 00:19:30.017 ] 00:19:30.017 }' 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:30.017 14:19:06 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.275 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:30.275 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:30.275 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:30.275 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:30.275 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:30.275 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:30.275 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.276 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.276 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 
00:19:30.276 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.276 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:30.276 "name": "raid_bdev1", 00:19:30.276 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:30.276 "strip_size_kb": 64, 00:19:30.276 "state": "online", 00:19:30.276 "raid_level": "raid5f", 00:19:30.276 "superblock": true, 00:19:30.276 "num_base_bdevs": 4, 00:19:30.276 "num_base_bdevs_discovered": 3, 00:19:30.276 "num_base_bdevs_operational": 3, 00:19:30.276 "base_bdevs_list": [ 00:19:30.276 { 00:19:30.276 "name": null, 00:19:30.276 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:30.276 "is_configured": false, 00:19:30.276 "data_offset": 0, 00:19:30.276 "data_size": 63488 00:19:30.276 }, 00:19:30.276 { 00:19:30.276 "name": "BaseBdev2", 00:19:30.276 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:30.276 "is_configured": true, 00:19:30.276 "data_offset": 2048, 00:19:30.276 "data_size": 63488 00:19:30.276 }, 00:19:30.276 { 00:19:30.276 "name": "BaseBdev3", 00:19:30.276 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:30.276 "is_configured": true, 00:19:30.276 "data_offset": 2048, 00:19:30.276 "data_size": 63488 00:19:30.276 }, 00:19:30.276 { 00:19:30.276 "name": "BaseBdev4", 00:19:30.276 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:30.276 "is_configured": true, 00:19:30.276 "data_offset": 2048, 00:19:30.276 "data_size": 63488 00:19:30.276 } 00:19:30.276 ] 00:19:30.276 }' 00:19:30.276 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:30.276 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:30.276 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:30.534 
14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@652 -- # local es=0 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:30.534 [2024-11-27 14:19:07.576668] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:30.534 [2024-11-27 14:19:07.576952] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:19:30.534 [2024-11-27 14:19:07.576976] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:19:30.534 request: 00:19:30.534 { 00:19:30.534 "base_bdev": "BaseBdev1", 00:19:30.534 "raid_bdev": "raid_bdev1", 00:19:30.534 "method": "bdev_raid_add_base_bdev", 00:19:30.534 "req_id": 1 00:19:30.534 } 00:19:30.534 Got JSON-RPC error response 00:19:30.534 response: 00:19:30.534 { 00:19:30.534 "code": -22, 00:19:30.534 "message": 
"Failed to add base bdev to RAID bdev: Invalid argument" 00:19:30.534 } 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@655 -- # es=1 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:30.534 14:19:07 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@779 -- # sleep 1 00:19:31.469 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid5f 64 3 00:19:31.469 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:31.469 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:31.469 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@105 -- # local raid_level=raid5f 00:19:31.469 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@106 -- # local strip_size=64 00:19:31.469 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=3 00:19:31.469 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:31.469 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:31.469 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:31.470 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:31.470 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:31.470 14:19:08 bdev_raid.raid5f_rebuild_test_sb 
-- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:31.470 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.470 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:31.470 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.470 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:31.470 "name": "raid_bdev1", 00:19:31.470 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:31.470 "strip_size_kb": 64, 00:19:31.470 "state": "online", 00:19:31.470 "raid_level": "raid5f", 00:19:31.470 "superblock": true, 00:19:31.470 "num_base_bdevs": 4, 00:19:31.470 "num_base_bdevs_discovered": 3, 00:19:31.470 "num_base_bdevs_operational": 3, 00:19:31.470 "base_bdevs_list": [ 00:19:31.470 { 00:19:31.470 "name": null, 00:19:31.470 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:31.470 "is_configured": false, 00:19:31.470 "data_offset": 0, 00:19:31.470 "data_size": 63488 00:19:31.470 }, 00:19:31.470 { 00:19:31.470 "name": "BaseBdev2", 00:19:31.470 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:31.470 "is_configured": true, 00:19:31.470 "data_offset": 2048, 00:19:31.470 "data_size": 63488 00:19:31.470 }, 00:19:31.470 { 00:19:31.470 "name": "BaseBdev3", 00:19:31.470 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:31.470 "is_configured": true, 00:19:31.470 "data_offset": 2048, 00:19:31.470 "data_size": 63488 00:19:31.470 }, 00:19:31.470 { 00:19:31.470 "name": "BaseBdev4", 00:19:31.470 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:31.470 "is_configured": true, 00:19:31.470 "data_offset": 2048, 00:19:31.470 "data_size": 63488 00:19:31.470 } 00:19:31.470 ] 00:19:31.470 }' 00:19:31.470 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:31.470 14:19:08 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set 
+x 00:19:32.038 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:32.038 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:32.038 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:32.038 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:32.038 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:32.038 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:32.039 "name": "raid_bdev1", 00:19:32.039 "uuid": "bd428490-0891-4f3f-8471-c0b44331e3ca", 00:19:32.039 "strip_size_kb": 64, 00:19:32.039 "state": "online", 00:19:32.039 "raid_level": "raid5f", 00:19:32.039 "superblock": true, 00:19:32.039 "num_base_bdevs": 4, 00:19:32.039 "num_base_bdevs_discovered": 3, 00:19:32.039 "num_base_bdevs_operational": 3, 00:19:32.039 "base_bdevs_list": [ 00:19:32.039 { 00:19:32.039 "name": null, 00:19:32.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:32.039 "is_configured": false, 00:19:32.039 "data_offset": 0, 00:19:32.039 "data_size": 63488 00:19:32.039 }, 00:19:32.039 { 00:19:32.039 "name": "BaseBdev2", 00:19:32.039 "uuid": "deca87e1-aa0d-509b-b6a1-9cf67d35e39e", 00:19:32.039 "is_configured": true, 00:19:32.039 
"data_offset": 2048, 00:19:32.039 "data_size": 63488 00:19:32.039 }, 00:19:32.039 { 00:19:32.039 "name": "BaseBdev3", 00:19:32.039 "uuid": "0d565e59-0d1f-589e-89e8-e02b82ef658d", 00:19:32.039 "is_configured": true, 00:19:32.039 "data_offset": 2048, 00:19:32.039 "data_size": 63488 00:19:32.039 }, 00:19:32.039 { 00:19:32.039 "name": "BaseBdev4", 00:19:32.039 "uuid": "def5fc4a-78ea-51a1-80d1-481791164be7", 00:19:32.039 "is_configured": true, 00:19:32.039 "data_offset": 2048, 00:19:32.039 "data_size": 63488 00:19:32.039 } 00:19:32.039 ] 00:19:32.039 }' 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@784 -- # killprocess 85420 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@954 -- # '[' -z 85420 ']' 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@958 -- # kill -0 85420 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # uname 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:32.039 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85420 00:19:32.298 killing process with pid 85420 00:19:32.298 Received shutdown signal, test time was about 60.000000 seconds 00:19:32.298 00:19:32.298 Latency(us) 00:19:32.298 [2024-11-27T14:19:09.576Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:32.298 [2024-11-27T14:19:09.576Z] 
=================================================================================================================== 00:19:32.298 [2024-11-27T14:19:09.576Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:32.298 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:32.298 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:32.298 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85420' 00:19:32.298 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@973 -- # kill 85420 00:19:32.298 [2024-11-27 14:19:09.316080] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:32.298 14:19:09 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@978 -- # wait 85420 00:19:32.298 [2024-11-27 14:19:09.316250] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:32.298 [2024-11-27 14:19:09.316376] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:32.298 [2024-11-27 14:19:09.316412] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:19:32.557 [2024-11-27 14:19:09.716519] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:33.496 14:19:10 bdev_raid.raid5f_rebuild_test_sb -- bdev/bdev_raid.sh@786 -- # return 0 00:19:33.496 00:19:33.496 real 0m28.901s 00:19:33.496 user 0m37.869s 00:19:33.496 sys 0m2.936s 00:19:33.496 14:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:33.496 ************************************ 00:19:33.496 END TEST raid5f_rebuild_test_sb 00:19:33.496 ************************************ 00:19:33.496 14:19:10 bdev_raid.raid5f_rebuild_test_sb -- common/autotest_common.sh@10 -- # set +x 00:19:33.496 14:19:10 bdev_raid -- 
bdev/bdev_raid.sh@995 -- # base_blocklen=4096 00:19:33.496 14:19:10 bdev_raid -- bdev/bdev_raid.sh@997 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:19:33.496 14:19:10 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:19:33.496 14:19:10 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.496 14:19:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:33.496 ************************************ 00:19:33.496 START TEST raid_state_function_test_sb_4k 00:19:33.496 ************************************ 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:19:33.496 14:19:10 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@211 -- # local strip_size 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@229 -- # raid_pid=86243 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:19:33.496 Process raid pid: 86243 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 86243' 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@231 -- # waitforlisten 86243 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86243 ']' 00:19:33.496 14:19:10 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:33.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:33.496 14:19:10 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:33.756 [2024-11-27 14:19:10.869666] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:19:33.756 [2024-11-27 14:19:10.870570] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:34.015 [2024-11-27 14:19:11.056371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.015 [2024-11-27 14:19:11.181669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:34.273 [2024-11-27 14:19:11.374473] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.273 [2024-11-27 14:19:11.374522] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n 
Existed_Raid 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.841 [2024-11-27 14:19:11.888679] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:34.841 [2024-11-27 14:19:11.888740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:34.841 [2024-11-27 14:19:11.888758] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:34.841 [2024-11-27 14:19:11.888785] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:34.841 
14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:34.841 "name": "Existed_Raid", 00:19:34.841 "uuid": "6f561238-50dd-496b-81ca-0ffe1a96d5eb", 00:19:34.841 "strip_size_kb": 0, 00:19:34.841 "state": "configuring", 00:19:34.841 "raid_level": "raid1", 00:19:34.841 "superblock": true, 00:19:34.841 "num_base_bdevs": 2, 00:19:34.841 "num_base_bdevs_discovered": 0, 00:19:34.841 "num_base_bdevs_operational": 2, 00:19:34.841 "base_bdevs_list": [ 00:19:34.841 { 00:19:34.841 "name": "BaseBdev1", 00:19:34.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.841 "is_configured": false, 00:19:34.841 "data_offset": 0, 00:19:34.841 "data_size": 0 00:19:34.841 }, 00:19:34.841 { 00:19:34.841 "name": "BaseBdev2", 00:19:34.841 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:34.841 "is_configured": false, 00:19:34.841 "data_offset": 0, 00:19:34.841 "data_size": 0 00:19:34.841 } 00:19:34.841 ] 00:19:34.841 }' 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:34.841 14:19:11 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # rpc_cmd 
bdev_raid_delete Existed_Raid 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.409 [2024-11-27 14:19:12.392758] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:35.409 [2024-11-27 14:19:12.392829] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.409 [2024-11-27 14:19:12.400717] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:19:35.409 [2024-11-27 14:19:12.400781] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:19:35.409 [2024-11-27 14:19:12.400829] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:35.409 [2024-11-27 14:19:12.400849] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.409 14:19:12 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.409 [2024-11-27 14:19:12.444413] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:35.409 BaseBdev1 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.409 [ 00:19:35.409 { 00:19:35.409 "name": "BaseBdev1", 00:19:35.409 "aliases": [ 00:19:35.409 
"5e83f741-f901-4f80-a546-1908e44a78c9" 00:19:35.409 ], 00:19:35.409 "product_name": "Malloc disk", 00:19:35.409 "block_size": 4096, 00:19:35.409 "num_blocks": 8192, 00:19:35.409 "uuid": "5e83f741-f901-4f80-a546-1908e44a78c9", 00:19:35.409 "assigned_rate_limits": { 00:19:35.409 "rw_ios_per_sec": 0, 00:19:35.409 "rw_mbytes_per_sec": 0, 00:19:35.409 "r_mbytes_per_sec": 0, 00:19:35.409 "w_mbytes_per_sec": 0 00:19:35.409 }, 00:19:35.409 "claimed": true, 00:19:35.409 "claim_type": "exclusive_write", 00:19:35.409 "zoned": false, 00:19:35.409 "supported_io_types": { 00:19:35.409 "read": true, 00:19:35.409 "write": true, 00:19:35.409 "unmap": true, 00:19:35.409 "flush": true, 00:19:35.409 "reset": true, 00:19:35.409 "nvme_admin": false, 00:19:35.409 "nvme_io": false, 00:19:35.409 "nvme_io_md": false, 00:19:35.409 "write_zeroes": true, 00:19:35.409 "zcopy": true, 00:19:35.409 "get_zone_info": false, 00:19:35.409 "zone_management": false, 00:19:35.409 "zone_append": false, 00:19:35.409 "compare": false, 00:19:35.409 "compare_and_write": false, 00:19:35.409 "abort": true, 00:19:35.409 "seek_hole": false, 00:19:35.409 "seek_data": false, 00:19:35.409 "copy": true, 00:19:35.409 "nvme_iov_md": false 00:19:35.409 }, 00:19:35.409 "memory_domains": [ 00:19:35.409 { 00:19:35.409 "dma_device_id": "system", 00:19:35.409 "dma_device_type": 1 00:19:35.409 }, 00:19:35.409 { 00:19:35.409 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:35.409 "dma_device_type": 2 00:19:35.409 } 00:19:35.409 ], 00:19:35.409 "driver_specific": {} 00:19:35.409 } 00:19:35.409 ] 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- 
bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.409 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.410 "name": "Existed_Raid", 00:19:35.410 "uuid": "cc49ce45-c385-44eb-b7dc-4d3c9e493039", 00:19:35.410 "strip_size_kb": 0, 00:19:35.410 "state": "configuring", 00:19:35.410 "raid_level": "raid1", 00:19:35.410 "superblock": true, 00:19:35.410 "num_base_bdevs": 2, 00:19:35.410 
"num_base_bdevs_discovered": 1, 00:19:35.410 "num_base_bdevs_operational": 2, 00:19:35.410 "base_bdevs_list": [ 00:19:35.410 { 00:19:35.410 "name": "BaseBdev1", 00:19:35.410 "uuid": "5e83f741-f901-4f80-a546-1908e44a78c9", 00:19:35.410 "is_configured": true, 00:19:35.410 "data_offset": 256, 00:19:35.410 "data_size": 7936 00:19:35.410 }, 00:19:35.410 { 00:19:35.410 "name": "BaseBdev2", 00:19:35.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.410 "is_configured": false, 00:19:35.410 "data_offset": 0, 00:19:35.410 "data_size": 0 00:19:35.410 } 00:19:35.410 ] 00:19:35.410 }' 00:19:35.410 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.410 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.977 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:19:35.977 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.977 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.977 [2024-11-27 14:19:12.988632] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:19:35.977 [2024-11-27 14:19:12.988690] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:19:35.977 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.977 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:19:35.977 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.977 14:19:12 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.977 [2024-11-27 14:19:13.000684] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:35.977 [2024-11-27 14:19:13.003343] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:19:35.977 [2024-11-27 14:19:13.003600] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:19:35.977 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.977 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:19:35.977 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:35.977 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:19:35.977 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:35.977 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:35.978 "name": "Existed_Raid", 00:19:35.978 "uuid": "9dd68473-5431-48c0-9da1-9b0a6a81f1da", 00:19:35.978 "strip_size_kb": 0, 00:19:35.978 "state": "configuring", 00:19:35.978 "raid_level": "raid1", 00:19:35.978 "superblock": true, 00:19:35.978 "num_base_bdevs": 2, 00:19:35.978 "num_base_bdevs_discovered": 1, 00:19:35.978 "num_base_bdevs_operational": 2, 00:19:35.978 "base_bdevs_list": [ 00:19:35.978 { 00:19:35.978 "name": "BaseBdev1", 00:19:35.978 "uuid": "5e83f741-f901-4f80-a546-1908e44a78c9", 00:19:35.978 "is_configured": true, 00:19:35.978 "data_offset": 256, 00:19:35.978 "data_size": 7936 00:19:35.978 }, 00:19:35.978 { 00:19:35.978 "name": "BaseBdev2", 00:19:35.978 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:35.978 "is_configured": false, 00:19:35.978 "data_offset": 0, 00:19:35.978 "data_size": 0 00:19:35.978 } 00:19:35.978 ] 00:19:35.978 }' 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:35.978 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.555 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2 00:19:36.555 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.555 14:19:13 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.555 [2024-11-27 14:19:13.587306] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:36.555 [2024-11-27 14:19:13.587610] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:19:36.555 [2024-11-27 14:19:13.587629] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:36.555 BaseBdev2 00:19:36.555 [2024-11-27 14:19:13.588003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:36.555 [2024-11-27 14:19:13.588266] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:36.555 [2024-11-27 14:19:13.588297] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:19:36.555 [2024-11-27 14:19:13.588486] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:36.555 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.555 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:19:36.555 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:19:36.555 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:36.555 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # local i 00:19:36.555 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:36.555 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:36.555 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:19:36.555 14:19:13 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.555 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.556 [ 00:19:36.556 { 00:19:36.556 "name": "BaseBdev2", 00:19:36.556 "aliases": [ 00:19:36.556 "0f980681-3d66-4ef4-ad08-0a4a92bda414" 00:19:36.556 ], 00:19:36.556 "product_name": "Malloc disk", 00:19:36.556 "block_size": 4096, 00:19:36.556 "num_blocks": 8192, 00:19:36.556 "uuid": "0f980681-3d66-4ef4-ad08-0a4a92bda414", 00:19:36.556 "assigned_rate_limits": { 00:19:36.556 "rw_ios_per_sec": 0, 00:19:36.556 "rw_mbytes_per_sec": 0, 00:19:36.556 "r_mbytes_per_sec": 0, 00:19:36.556 "w_mbytes_per_sec": 0 00:19:36.556 }, 00:19:36.556 "claimed": true, 00:19:36.556 "claim_type": "exclusive_write", 00:19:36.556 "zoned": false, 00:19:36.556 "supported_io_types": { 00:19:36.556 "read": true, 00:19:36.556 "write": true, 00:19:36.556 "unmap": true, 00:19:36.556 "flush": true, 00:19:36.556 "reset": true, 00:19:36.556 "nvme_admin": false, 00:19:36.556 "nvme_io": false, 00:19:36.556 "nvme_io_md": false, 00:19:36.556 "write_zeroes": true, 00:19:36.556 "zcopy": true, 00:19:36.556 "get_zone_info": false, 00:19:36.556 "zone_management": false, 00:19:36.556 "zone_append": false, 00:19:36.556 "compare": false, 00:19:36.556 "compare_and_write": false, 00:19:36.556 "abort": true, 00:19:36.556 "seek_hole": false, 00:19:36.556 "seek_data": false, 00:19:36.556 "copy": true, 00:19:36.556 "nvme_iov_md": false 
00:19:36.556 }, 00:19:36.556 "memory_domains": [ 00:19:36.556 { 00:19:36.556 "dma_device_id": "system", 00:19:36.556 "dma_device_type": 1 00:19:36.556 }, 00:19:36.556 { 00:19:36.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:36.556 "dma_device_type": 2 00:19:36.556 } 00:19:36.556 ], 00:19:36.556 "driver_specific": {} 00:19:36.556 } 00:19:36.556 ] 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@911 -- # return 0 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # 
local tmp 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:36.556 "name": "Existed_Raid", 00:19:36.556 "uuid": "9dd68473-5431-48c0-9da1-9b0a6a81f1da", 00:19:36.556 "strip_size_kb": 0, 00:19:36.556 "state": "online", 00:19:36.556 "raid_level": "raid1", 00:19:36.556 "superblock": true, 00:19:36.556 "num_base_bdevs": 2, 00:19:36.556 "num_base_bdevs_discovered": 2, 00:19:36.556 "num_base_bdevs_operational": 2, 00:19:36.556 "base_bdevs_list": [ 00:19:36.556 { 00:19:36.556 "name": "BaseBdev1", 00:19:36.556 "uuid": "5e83f741-f901-4f80-a546-1908e44a78c9", 00:19:36.556 "is_configured": true, 00:19:36.556 "data_offset": 256, 00:19:36.556 "data_size": 7936 00:19:36.556 }, 00:19:36.556 { 00:19:36.556 "name": "BaseBdev2", 00:19:36.556 "uuid": "0f980681-3d66-4ef4-ad08-0a4a92bda414", 00:19:36.556 "is_configured": true, 00:19:36.556 "data_offset": 256, 00:19:36.556 "data_size": 7936 00:19:36.556 } 00:19:36.556 ] 00:19:36.556 }' 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:36.556 14:19:13 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:19:37.126 14:19:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.126 [2024-11-27 14:19:14.179981] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:37.126 "name": "Existed_Raid", 00:19:37.126 "aliases": [ 00:19:37.126 "9dd68473-5431-48c0-9da1-9b0a6a81f1da" 00:19:37.126 ], 00:19:37.126 "product_name": "Raid Volume", 00:19:37.126 "block_size": 4096, 00:19:37.126 "num_blocks": 7936, 00:19:37.126 "uuid": "9dd68473-5431-48c0-9da1-9b0a6a81f1da", 00:19:37.126 "assigned_rate_limits": { 00:19:37.126 "rw_ios_per_sec": 0, 00:19:37.126 "rw_mbytes_per_sec": 0, 00:19:37.126 "r_mbytes_per_sec": 0, 00:19:37.126 "w_mbytes_per_sec": 0 00:19:37.126 }, 00:19:37.126 "claimed": false, 00:19:37.126 "zoned": false, 00:19:37.126 "supported_io_types": { 00:19:37.126 "read": true, 
00:19:37.126 "write": true, 00:19:37.126 "unmap": false, 00:19:37.126 "flush": false, 00:19:37.126 "reset": true, 00:19:37.126 "nvme_admin": false, 00:19:37.126 "nvme_io": false, 00:19:37.126 "nvme_io_md": false, 00:19:37.126 "write_zeroes": true, 00:19:37.126 "zcopy": false, 00:19:37.126 "get_zone_info": false, 00:19:37.126 "zone_management": false, 00:19:37.126 "zone_append": false, 00:19:37.126 "compare": false, 00:19:37.126 "compare_and_write": false, 00:19:37.126 "abort": false, 00:19:37.126 "seek_hole": false, 00:19:37.126 "seek_data": false, 00:19:37.126 "copy": false, 00:19:37.126 "nvme_iov_md": false 00:19:37.126 }, 00:19:37.126 "memory_domains": [ 00:19:37.126 { 00:19:37.126 "dma_device_id": "system", 00:19:37.126 "dma_device_type": 1 00:19:37.126 }, 00:19:37.126 { 00:19:37.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.126 "dma_device_type": 2 00:19:37.126 }, 00:19:37.126 { 00:19:37.126 "dma_device_id": "system", 00:19:37.126 "dma_device_type": 1 00:19:37.126 }, 00:19:37.126 { 00:19:37.126 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:37.126 "dma_device_type": 2 00:19:37.126 } 00:19:37.126 ], 00:19:37.126 "driver_specific": { 00:19:37.126 "raid": { 00:19:37.126 "uuid": "9dd68473-5431-48c0-9da1-9b0a6a81f1da", 00:19:37.126 "strip_size_kb": 0, 00:19:37.126 "state": "online", 00:19:37.126 "raid_level": "raid1", 00:19:37.126 "superblock": true, 00:19:37.126 "num_base_bdevs": 2, 00:19:37.126 "num_base_bdevs_discovered": 2, 00:19:37.126 "num_base_bdevs_operational": 2, 00:19:37.126 "base_bdevs_list": [ 00:19:37.126 { 00:19:37.126 "name": "BaseBdev1", 00:19:37.126 "uuid": "5e83f741-f901-4f80-a546-1908e44a78c9", 00:19:37.126 "is_configured": true, 00:19:37.126 "data_offset": 256, 00:19:37.126 "data_size": 7936 00:19:37.126 }, 00:19:37.126 { 00:19:37.126 "name": "BaseBdev2", 00:19:37.126 "uuid": "0f980681-3d66-4ef4-ad08-0a4a92bda414", 00:19:37.126 "is_configured": true, 00:19:37.126 "data_offset": 256, 00:19:37.126 "data_size": 7936 00:19:37.126 } 
00:19:37.126 ] 00:19:37.126 } 00:19:37.126 } 00:19:37.126 }' 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:19:37.126 BaseBdev2' 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:37.126 14:19:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.126 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.385 [2024-11-27 14:19:14.459857] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # local expected_state 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:19:37.385 14:19:14 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:37.385 "name": "Existed_Raid", 00:19:37.385 "uuid": "9dd68473-5431-48c0-9da1-9b0a6a81f1da", 00:19:37.385 "strip_size_kb": 0, 00:19:37.385 "state": "online", 00:19:37.385 "raid_level": "raid1", 00:19:37.385 "superblock": true, 00:19:37.385 
"num_base_bdevs": 2, 00:19:37.385 "num_base_bdevs_discovered": 1, 00:19:37.385 "num_base_bdevs_operational": 1, 00:19:37.385 "base_bdevs_list": [ 00:19:37.385 { 00:19:37.385 "name": null, 00:19:37.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:37.385 "is_configured": false, 00:19:37.385 "data_offset": 0, 00:19:37.385 "data_size": 7936 00:19:37.385 }, 00:19:37.385 { 00:19:37.385 "name": "BaseBdev2", 00:19:37.385 "uuid": "0f980681-3d66-4ef4-ad08-0a4a92bda414", 00:19:37.385 "is_configured": true, 00:19:37.385 "data_offset": 256, 00:19:37.385 "data_size": 7936 00:19:37.385 } 00:19:37.385 ] 00:19:37.385 }' 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:37.385 14:19:14 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.952 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:19:37.952 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:37.952 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:37.952 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.952 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:19:37.952 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.952 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.952 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:19:37.952 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:19:37.952 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # rpc_cmd 
bdev_malloc_delete BaseBdev2 00:19:37.953 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.953 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:37.953 [2024-11-27 14:19:15.145507] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:19:37.953 [2024-11-27 14:19:15.145628] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:38.212 [2024-11-27 14:19:15.231331] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:38.212 [2024-11-27 14:19:15.231594] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:38.212 [2024-11-27 14:19:15.231750] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:19:38.212 14:19:15 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@326 -- # killprocess 86243 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86243 ']' 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86243 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86243 00:19:38.212 killing process with pid 86243 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86243' 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86243 00:19:38.212 [2024-11-27 14:19:15.328211] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:38.212 14:19:15 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86243 00:19:38.212 [2024-11-27 14:19:15.343598] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:39.201 14:19:16 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@328 -- # return 0 00:19:39.201 00:19:39.201 real 0m5.645s 00:19:39.201 user 0m8.544s 00:19:39.201 sys 0m0.851s 00:19:39.201 14:19:16 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:39.201 ************************************ 00:19:39.201 END TEST raid_state_function_test_sb_4k 00:19:39.201 ************************************ 00:19:39.201 14:19:16 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.201 14:19:16 bdev_raid -- bdev/bdev_raid.sh@998 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:19:39.201 14:19:16 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:39.201 14:19:16 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:39.201 14:19:16 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:39.202 ************************************ 00:19:39.202 START TEST raid_superblock_test_4k 00:19:39.202 ************************************ 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:19:39.202 
14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # raid_pid=86502 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@413 -- # waitforlisten 86502 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@835 -- # '[' -z 86502 ']' 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:39.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:39.202 14:19:16 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:39.461 [2024-11-27 14:19:16.558890] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:19:39.461 [2024-11-27 14:19:16.559403] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86502 ] 00:19:39.461 [2024-11-27 14:19:16.728447] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.720 [2024-11-27 14:19:16.862187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.979 [2024-11-27 14:19:17.066562] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:39.979 [2024-11-27 14:19:17.066604] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@868 -- # return 0 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 
4096 -b malloc1 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.547 malloc1 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.547 [2024-11-27 14:19:17.628894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:40.547 [2024-11-27 14:19:17.629151] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.547 [2024-11-27 14:19:17.629305] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:40.547 [2024-11-27 14:19:17.629435] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.547 [2024-11-27 14:19:17.632286] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.547 [2024-11-27 14:19:17.632468] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:40.547 pt1 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local 
bdev_pt=pt2 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:19:40.547 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc2 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.548 malloc2 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.548 [2024-11-27 14:19:17.682321] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:40.548 [2024-11-27 14:19:17.682417] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:40.548 [2024-11-27 14:19:17.682451] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:40.548 [2024-11-27 14:19:17.682465] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:40.548 [2024-11-27 14:19:17.685150] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:40.548 [2024-11-27 
14:19:17.685190] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:40.548 pt2 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.548 [2024-11-27 14:19:17.694380] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:40.548 [2024-11-27 14:19:17.696805] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:40.548 [2024-11-27 14:19:17.697015] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:40.548 [2024-11-27 14:19:17.697036] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:40.548 [2024-11-27 14:19:17.697313] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:19:40.548 [2024-11-27 14:19:17.697491] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:40.548 [2024-11-27 14:19:17.697513] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:40.548 [2024-11-27 14:19:17.697682] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 
-- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:40.548 "name": "raid_bdev1", 00:19:40.548 "uuid": "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0", 00:19:40.548 "strip_size_kb": 0, 00:19:40.548 "state": "online", 00:19:40.548 "raid_level": "raid1", 00:19:40.548 "superblock": true, 00:19:40.548 "num_base_bdevs": 2, 00:19:40.548 
"num_base_bdevs_discovered": 2, 00:19:40.548 "num_base_bdevs_operational": 2, 00:19:40.548 "base_bdevs_list": [ 00:19:40.548 { 00:19:40.548 "name": "pt1", 00:19:40.548 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:40.548 "is_configured": true, 00:19:40.548 "data_offset": 256, 00:19:40.548 "data_size": 7936 00:19:40.548 }, 00:19:40.548 { 00:19:40.548 "name": "pt2", 00:19:40.548 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:40.548 "is_configured": true, 00:19:40.548 "data_offset": 256, 00:19:40.548 "data_size": 7936 00:19:40.548 } 00:19:40.548 ] 00:19:40.548 }' 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:40.548 14:19:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:41.118 [2024-11-27 14:19:18.210927] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:41.118 "name": "raid_bdev1", 00:19:41.118 "aliases": [ 00:19:41.118 "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0" 00:19:41.118 ], 00:19:41.118 "product_name": "Raid Volume", 00:19:41.118 "block_size": 4096, 00:19:41.118 "num_blocks": 7936, 00:19:41.118 "uuid": "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0", 00:19:41.118 "assigned_rate_limits": { 00:19:41.118 "rw_ios_per_sec": 0, 00:19:41.118 "rw_mbytes_per_sec": 0, 00:19:41.118 "r_mbytes_per_sec": 0, 00:19:41.118 "w_mbytes_per_sec": 0 00:19:41.118 }, 00:19:41.118 "claimed": false, 00:19:41.118 "zoned": false, 00:19:41.118 "supported_io_types": { 00:19:41.118 "read": true, 00:19:41.118 "write": true, 00:19:41.118 "unmap": false, 00:19:41.118 "flush": false, 00:19:41.118 "reset": true, 00:19:41.118 "nvme_admin": false, 00:19:41.118 "nvme_io": false, 00:19:41.118 "nvme_io_md": false, 00:19:41.118 "write_zeroes": true, 00:19:41.118 "zcopy": false, 00:19:41.118 "get_zone_info": false, 00:19:41.118 "zone_management": false, 00:19:41.118 "zone_append": false, 00:19:41.118 "compare": false, 00:19:41.118 "compare_and_write": false, 00:19:41.118 "abort": false, 00:19:41.118 "seek_hole": false, 00:19:41.118 "seek_data": false, 00:19:41.118 "copy": false, 00:19:41.118 "nvme_iov_md": false 00:19:41.118 }, 00:19:41.118 "memory_domains": [ 00:19:41.118 { 00:19:41.118 "dma_device_id": "system", 00:19:41.118 "dma_device_type": 1 00:19:41.118 }, 00:19:41.118 { 00:19:41.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.118 "dma_device_type": 2 00:19:41.118 }, 00:19:41.118 { 00:19:41.118 "dma_device_id": "system", 00:19:41.118 "dma_device_type": 1 00:19:41.118 }, 00:19:41.118 { 00:19:41.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:41.118 "dma_device_type": 2 00:19:41.118 } 00:19:41.118 ], 
00:19:41.118 "driver_specific": { 00:19:41.118 "raid": { 00:19:41.118 "uuid": "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0", 00:19:41.118 "strip_size_kb": 0, 00:19:41.118 "state": "online", 00:19:41.118 "raid_level": "raid1", 00:19:41.118 "superblock": true, 00:19:41.118 "num_base_bdevs": 2, 00:19:41.118 "num_base_bdevs_discovered": 2, 00:19:41.118 "num_base_bdevs_operational": 2, 00:19:41.118 "base_bdevs_list": [ 00:19:41.118 { 00:19:41.118 "name": "pt1", 00:19:41.118 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:41.118 "is_configured": true, 00:19:41.118 "data_offset": 256, 00:19:41.118 "data_size": 7936 00:19:41.118 }, 00:19:41.118 { 00:19:41.118 "name": "pt2", 00:19:41.118 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.118 "is_configured": true, 00:19:41.118 "data_offset": 256, 00:19:41.118 "data_size": 7936 00:19:41.118 } 00:19:41.118 ] 00:19:41.118 } 00:19:41.118 } 00:19:41.118 }' 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:41.118 pt2' 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.118 14:19:18 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.118 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:19:41.378 [2024-11-27 14:19:18.450993] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@436 -- # '[' -z a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0 ']' 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.378 [2024-11-27 14:19:18.502591] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.378 [2024-11-27 14:19:18.502620] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:41.378 [2024-11-27 14:19:18.502716] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:41.378 [2024-11-27 14:19:18.502827] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:41.378 [2024-11-27 14:19:18.502848] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@442 -- # raid_bdev= 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@457 
-- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@652 -- # local es=0 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.378 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.378 [2024-11-27 14:19:18.638706] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:19:41.378 [2024-11-27 14:19:18.641472] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:19:41.378 [2024-11-27 14:19:18.641706] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:19:41.378 [2024-11-27 14:19:18.642036] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:19:41.378 [2024-11-27 14:19:18.642196] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:41.378 [2024-11-27 14:19:18.642300] bdev_raid.c: 380:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:19:41.378 request: 00:19:41.378 { 00:19:41.378 "name": "raid_bdev1", 00:19:41.378 "raid_level": "raid1", 00:19:41.378 "base_bdevs": [ 00:19:41.378 "malloc1", 00:19:41.379 "malloc2" 00:19:41.379 ], 00:19:41.379 "superblock": false, 00:19:41.379 "method": "bdev_raid_create", 00:19:41.379 "req_id": 1 00:19:41.379 } 00:19:41.379 Got JSON-RPC error response 00:19:41.379 response: 00:19:41.379 { 00:19:41.379 "code": -17, 00:19:41.379 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:19:41.379 } 00:19:41.379 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:19:41.379 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@655 -- # es=1 00:19:41.379 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:41.379 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:41.379 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:41.379 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.379 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.637 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:19:41.637 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.637 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.637 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:19:41.637 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:19:41.637 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 
00000000-0000-0000-0000-000000000001 00:19:41.637 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.637 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.637 [2024-11-27 14:19:18.710700] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:41.637 [2024-11-27 14:19:18.710821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:41.637 [2024-11-27 14:19:18.710866] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:19:41.638 [2024-11-27 14:19:18.710886] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:41.638 [2024-11-27 14:19:18.713932] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:41.638 [2024-11-27 14:19:18.713996] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:41.638 [2024-11-27 14:19:18.714105] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:41.638 [2024-11-27 14:19:18.714223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:41.638 pt1 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:41.638 "name": "raid_bdev1", 00:19:41.638 "uuid": "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0", 00:19:41.638 "strip_size_kb": 0, 00:19:41.638 "state": "configuring", 00:19:41.638 "raid_level": "raid1", 00:19:41.638 "superblock": true, 00:19:41.638 "num_base_bdevs": 2, 00:19:41.638 "num_base_bdevs_discovered": 1, 00:19:41.638 "num_base_bdevs_operational": 2, 00:19:41.638 "base_bdevs_list": [ 00:19:41.638 { 00:19:41.638 "name": "pt1", 00:19:41.638 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:41.638 "is_configured": true, 00:19:41.638 "data_offset": 256, 00:19:41.638 "data_size": 7936 00:19:41.638 }, 00:19:41.638 { 00:19:41.638 "name": null, 00:19:41.638 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:41.638 "is_configured": false, 00:19:41.638 "data_offset": 256, 00:19:41.638 "data_size": 7936 00:19:41.638 } 
00:19:41.638 ] 00:19:41.638 }' 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:41.638 14:19:18 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.206 [2024-11-27 14:19:19.234874] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:42.206 [2024-11-27 14:19:19.234963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:42.206 [2024-11-27 14:19:19.234993] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:42.206 [2024-11-27 14:19:19.235010] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:42.206 [2024-11-27 14:19:19.235616] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:42.206 [2024-11-27 14:19:19.235652] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:42.206 [2024-11-27 14:19:19.235779] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:42.206 [2024-11-27 14:19:19.235835] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:42.206 [2024-11-27 14:19:19.235987] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device 
register 0x617000007e80 00:19:42.206 [2024-11-27 14:19:19.236013] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:42.206 [2024-11-27 14:19:19.236329] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:42.206 [2024-11-27 14:19:19.236531] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:19:42.206 [2024-11-27 14:19:19.236551] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:19:42.206 [2024-11-27 14:19:19.236728] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:42.206 pt2 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:42.206 "name": "raid_bdev1", 00:19:42.206 "uuid": "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0", 00:19:42.206 "strip_size_kb": 0, 00:19:42.206 "state": "online", 00:19:42.206 "raid_level": "raid1", 00:19:42.206 "superblock": true, 00:19:42.206 "num_base_bdevs": 2, 00:19:42.206 "num_base_bdevs_discovered": 2, 00:19:42.206 "num_base_bdevs_operational": 2, 00:19:42.206 "base_bdevs_list": [ 00:19:42.206 { 00:19:42.206 "name": "pt1", 00:19:42.206 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:42.206 "is_configured": true, 00:19:42.206 "data_offset": 256, 00:19:42.206 "data_size": 7936 00:19:42.206 }, 00:19:42.206 { 00:19:42.206 "name": "pt2", 00:19:42.206 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.206 "is_configured": true, 00:19:42.206 "data_offset": 256, 00:19:42.206 "data_size": 7936 00:19:42.206 } 00:19:42.206 ] 00:19:42.206 }' 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:42.206 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties 
raid_bdev1 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@184 -- # local name 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.774 [2024-11-27 14:19:19.779320] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:19:42.774 "name": "raid_bdev1", 00:19:42.774 "aliases": [ 00:19:42.774 "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0" 00:19:42.774 ], 00:19:42.774 "product_name": "Raid Volume", 00:19:42.774 "block_size": 4096, 00:19:42.774 "num_blocks": 7936, 00:19:42.774 "uuid": "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0", 00:19:42.774 "assigned_rate_limits": { 00:19:42.774 "rw_ios_per_sec": 0, 00:19:42.774 "rw_mbytes_per_sec": 0, 00:19:42.774 "r_mbytes_per_sec": 0, 00:19:42.774 "w_mbytes_per_sec": 0 00:19:42.774 }, 00:19:42.774 "claimed": false, 00:19:42.774 "zoned": false, 00:19:42.774 "supported_io_types": { 00:19:42.774 "read": true, 00:19:42.774 "write": true, 00:19:42.774 "unmap": false, 
00:19:42.774 "flush": false, 00:19:42.774 "reset": true, 00:19:42.774 "nvme_admin": false, 00:19:42.774 "nvme_io": false, 00:19:42.774 "nvme_io_md": false, 00:19:42.774 "write_zeroes": true, 00:19:42.774 "zcopy": false, 00:19:42.774 "get_zone_info": false, 00:19:42.774 "zone_management": false, 00:19:42.774 "zone_append": false, 00:19:42.774 "compare": false, 00:19:42.774 "compare_and_write": false, 00:19:42.774 "abort": false, 00:19:42.774 "seek_hole": false, 00:19:42.774 "seek_data": false, 00:19:42.774 "copy": false, 00:19:42.774 "nvme_iov_md": false 00:19:42.774 }, 00:19:42.774 "memory_domains": [ 00:19:42.774 { 00:19:42.774 "dma_device_id": "system", 00:19:42.774 "dma_device_type": 1 00:19:42.774 }, 00:19:42.774 { 00:19:42.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.774 "dma_device_type": 2 00:19:42.774 }, 00:19:42.774 { 00:19:42.774 "dma_device_id": "system", 00:19:42.774 "dma_device_type": 1 00:19:42.774 }, 00:19:42.774 { 00:19:42.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:19:42.774 "dma_device_type": 2 00:19:42.774 } 00:19:42.774 ], 00:19:42.774 "driver_specific": { 00:19:42.774 "raid": { 00:19:42.774 "uuid": "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0", 00:19:42.774 "strip_size_kb": 0, 00:19:42.774 "state": "online", 00:19:42.774 "raid_level": "raid1", 00:19:42.774 "superblock": true, 00:19:42.774 "num_base_bdevs": 2, 00:19:42.774 "num_base_bdevs_discovered": 2, 00:19:42.774 "num_base_bdevs_operational": 2, 00:19:42.774 "base_bdevs_list": [ 00:19:42.774 { 00:19:42.774 "name": "pt1", 00:19:42.774 "uuid": "00000000-0000-0000-0000-000000000001", 00:19:42.774 "is_configured": true, 00:19:42.774 "data_offset": 256, 00:19:42.774 "data_size": 7936 00:19:42.774 }, 00:19:42.774 { 00:19:42.774 "name": "pt2", 00:19:42.774 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:42.774 "is_configured": true, 00:19:42.774 "data_offset": 256, 00:19:42.774 "data_size": 7936 00:19:42.774 } 00:19:42.774 ] 00:19:42.774 } 00:19:42.774 } 00:19:42.774 }' 00:19:42.774 
14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:19:42.774 pt2' 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 ' 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:19:42.774 14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.774 
14:19:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.774 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.774 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 ' 00:19:42.774 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@193 -- # [[ 4096 == \4\0\9\6\ \ \ ]] 00:19:42.774 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:42.774 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:42.774 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:42.774 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:19:43.035 [2024-11-27 14:19:20.051379] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@487 -- # '[' a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0 '!=' a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0 ']' 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # case $1 in 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@199 -- # return 0 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.035 [2024-11-27 14:19:20.099156] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:19:43.035 
14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.035 "name": "raid_bdev1", 00:19:43.035 "uuid": "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0", 
00:19:43.035 "strip_size_kb": 0, 00:19:43.035 "state": "online", 00:19:43.035 "raid_level": "raid1", 00:19:43.035 "superblock": true, 00:19:43.035 "num_base_bdevs": 2, 00:19:43.035 "num_base_bdevs_discovered": 1, 00:19:43.035 "num_base_bdevs_operational": 1, 00:19:43.035 "base_bdevs_list": [ 00:19:43.035 { 00:19:43.035 "name": null, 00:19:43.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.035 "is_configured": false, 00:19:43.035 "data_offset": 0, 00:19:43.035 "data_size": 7936 00:19:43.035 }, 00:19:43.035 { 00:19:43.035 "name": "pt2", 00:19:43.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.035 "is_configured": true, 00:19:43.035 "data_offset": 256, 00:19:43.035 "data_size": 7936 00:19:43.035 } 00:19:43.035 ] 00:19:43.035 }' 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.035 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.604 [2024-11-27 14:19:20.679245] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:43.604 [2024-11-27 14:19:20.679416] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:43.604 [2024-11-27 14:19:20.679540] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:43.604 [2024-11-27 14:19:20.679604] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:43.604 [2024-11-27 14:19:20.679624] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:19:43.604 14:19:20 
bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:19:43.604 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # i=1 00:19:43.605 14:19:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.605 [2024-11-27 14:19:20.751263] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:19:43.605 [2024-11-27 14:19:20.751574] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:43.605 [2024-11-27 14:19:20.751617] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:19:43.605 [2024-11-27 14:19:20.751635] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:43.605 [2024-11-27 14:19:20.754694] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:43.605 [2024-11-27 14:19:20.754895] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:19:43.605 [2024-11-27 14:19:20.755016] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:19:43.605 [2024-11-27 14:19:20.755084] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:43.605 [2024-11-27 14:19:20.755231] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:19:43.605 [2024-11-27 14:19:20.755254] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:43.605 [2024-11-27 14:19:20.755591] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:19:43.605 [2024-11-27 14:19:20.755771] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:19:43.605 [2024-11-27 14:19:20.755817] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 
00:19:43.605 [2024-11-27 14:19:20.756079] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:43.605 pt2 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:43.605 "name": "raid_bdev1", 00:19:43.605 "uuid": "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0", 00:19:43.605 "strip_size_kb": 0, 00:19:43.605 "state": "online", 00:19:43.605 "raid_level": "raid1", 00:19:43.605 "superblock": true, 00:19:43.605 "num_base_bdevs": 2, 00:19:43.605 "num_base_bdevs_discovered": 1, 00:19:43.605 "num_base_bdevs_operational": 1, 00:19:43.605 "base_bdevs_list": [ 00:19:43.605 { 00:19:43.605 "name": null, 00:19:43.605 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:43.605 "is_configured": false, 00:19:43.605 "data_offset": 256, 00:19:43.605 "data_size": 7936 00:19:43.605 }, 00:19:43.605 { 00:19:43.605 "name": "pt2", 00:19:43.605 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:43.605 "is_configured": true, 00:19:43.605 "data_offset": 256, 00:19:43.605 "data_size": 7936 00:19:43.605 } 00:19:43.605 ] 00:19:43.605 }' 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:43.605 14:19:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.174 [2024-11-27 14:19:21.279521] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.174 [2024-11-27 14:19:21.279715] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:44.174 [2024-11-27 14:19:21.279889] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:44.174 [2024-11-27 14:19:21.279961] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:44.174 [2024-11-27 14:19:21.279977] 
bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.174 [2024-11-27 14:19:21.347554] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:19:44.174 [2024-11-27 14:19:21.347813] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:44.174 [2024-11-27 14:19:21.347856] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:19:44.174 [2024-11-27 14:19:21.347871] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:44.174 [2024-11-27 14:19:21.350997] vbdev_passthru.c: 
709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:44.174 [2024-11-27 14:19:21.351042] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:19:44.174 [2024-11-27 14:19:21.351161] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:19:44.174 [2024-11-27 14:19:21.351221] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:19:44.174 [2024-11-27 14:19:21.351398] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:19:44.174 [2024-11-27 14:19:21.351417] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:19:44.174 [2024-11-27 14:19:21.351438] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:19:44.174 [2024-11-27 14:19:21.351507] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:19:44.174 [2024-11-27 14:19:21.351618] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:19:44.174 [2024-11-27 14:19:21.351633] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:44.174 [2024-11-27 14:19:21.351972] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:44.174 [2024-11-27 14:19:21.352160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:19:44.174 [2024-11-27 14:19:21.352182] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:19:44.174 pt1 00:19:44.174 [2024-11-27 14:19:21.352492] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 
00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:44.174 "name": "raid_bdev1", 00:19:44.174 "uuid": "a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0", 00:19:44.174 "strip_size_kb": 0, 00:19:44.174 "state": "online", 00:19:44.174 "raid_level": "raid1", 
00:19:44.174 "superblock": true, 00:19:44.174 "num_base_bdevs": 2, 00:19:44.174 "num_base_bdevs_discovered": 1, 00:19:44.174 "num_base_bdevs_operational": 1, 00:19:44.174 "base_bdevs_list": [ 00:19:44.174 { 00:19:44.174 "name": null, 00:19:44.174 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:44.174 "is_configured": false, 00:19:44.174 "data_offset": 256, 00:19:44.174 "data_size": 7936 00:19:44.174 }, 00:19:44.174 { 00:19:44.174 "name": "pt2", 00:19:44.174 "uuid": "00000000-0000-0000-0000-000000000002", 00:19:44.174 "is_configured": true, 00:19:44.174 "data_offset": 256, 00:19:44.174 "data_size": 7936 00:19:44.174 } 00:19:44.174 ] 00:19:44.174 }' 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:44.174 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:19:44.743 
[2024-11-27 14:19:21.948254] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@558 -- # '[' a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0 '!=' a201d1c5-79ca-4bf1-9c3a-529fb0f1c7b0 ']' 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@563 -- # killprocess 86502 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@954 -- # '[' -z 86502 ']' 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # kill -0 86502 00:19:44.743 14:19:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # uname 00:19:44.743 14:19:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:44.743 14:19:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86502 00:19:45.003 killing process with pid 86502 00:19:45.003 14:19:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:45.003 14:19:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:45.003 14:19:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86502' 00:19:45.003 14:19:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@973 -- # kill 86502 00:19:45.003 [2024-11-27 14:19:22.031417] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:19:45.003 14:19:22 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@978 -- # wait 86502 00:19:45.003 [2024-11-27 14:19:22.031528] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:45.003 [2024-11-27 14:19:22.031591] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: 
raid bdev base bdevs is 0, going to free all in destruct 00:19:45.003 [2024-11-27 14:19:22.031613] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:19:45.003 [2024-11-27 14:19:22.220956] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:19:46.382 ************************************ 00:19:46.382 END TEST raid_superblock_test_4k 00:19:46.382 ************************************ 00:19:46.382 14:19:23 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@565 -- # return 0 00:19:46.382 00:19:46.382 real 0m6.802s 00:19:46.382 user 0m10.792s 00:19:46.382 sys 0m1.007s 00:19:46.382 14:19:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:46.382 14:19:23 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:19:46.382 14:19:23 bdev_raid -- bdev/bdev_raid.sh@999 -- # '[' true = true ']' 00:19:46.382 14:19:23 bdev_raid -- bdev/bdev_raid.sh@1000 -- # run_test raid_rebuild_test_sb_4k raid_rebuild_test raid1 2 true false true 00:19:46.382 14:19:23 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:19:46.382 14:19:23 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:46.382 14:19:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:19:46.382 ************************************ 00:19:46.382 START TEST raid_rebuild_test_sb_4k 00:19:46.382 ************************************ 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@572 -- # local background_io=false 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@573 -- # local verify=true 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@576 -- # local strip_size 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@577 -- # local create_arg 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@579 -- # local data_offset 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:19:46.382 14:19:23 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@597 -- # raid_pid=86836 00:19:46.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@598 -- # waitforlisten 86836 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@835 -- # '[' -z 86836 ']' 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.382 14:19:23 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:46.382 I/O size of 3145728 is greater than zero copy threshold (65536). 00:19:46.382 Zero copy mechanism will not be used. 00:19:46.382 [2024-11-27 14:19:23.430206] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:19:46.382 [2024-11-27 14:19:23.430397] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86836 ] 00:19:46.382 [2024-11-27 14:19:23.609887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:46.642 [2024-11-27 14:19:23.737261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.900 [2024-11-27 14:19:23.935535] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:46.900 [2024-11-27 14:19:23.935918] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@868 -- # return 0 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev1_malloc 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.159 BaseBdev1_malloc 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.159 [2024-11-27 14:19:24.417836] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:19:47.159 [2024-11-27 14:19:24.417963] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.159 [2024-11-27 14:19:24.417996] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:19:47.159 [2024-11-27 14:19:24.418015] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.159 [2024-11-27 14:19:24.421030] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.159 [2024-11-27 14:19:24.421096] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:19:47.159 BaseBdev1 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.159 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:19:47.160 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -b BaseBdev2_malloc 00:19:47.160 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.160 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.419 BaseBdev2_malloc 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.419 [2024-11-27 14:19:24.474844] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:19:47.419 [2024-11-27 14:19:24.474934] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:19:47.419 [2024-11-27 14:19:24.474972] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:19:47.419 [2024-11-27 14:19:24.474990] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.419 [2024-11-27 14:19:24.478086] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.419 [2024-11-27 14:19:24.478348] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:19:47.419 BaseBdev2 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -b spare_malloc 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.419 spare_malloc 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.419 spare_delay 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.419 
[2024-11-27 14:19:24.547174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:47.419 [2024-11-27 14:19:24.547248] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:47.419 [2024-11-27 14:19:24.547280] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:19:47.419 [2024-11-27 14:19:24.547298] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:47.419 [2024-11-27 14:19:24.550164] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:47.419 [2024-11-27 14:19:24.550259] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:19:47.419 spare 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.419 [2024-11-27 14:19:24.555291] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:19:47.419 [2024-11-27 14:19:24.557695] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:47.419 [2024-11-27 14:19:24.557981] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:19:47.419 [2024-11-27 14:19:24.558006] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:47.419 [2024-11-27 14:19:24.558331] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:19:47.419 [2024-11-27 14:19:24.558573] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:19:47.419 [2024-11-27 
14:19:24.558588] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:19:47.419 [2024-11-27 14:19:24.558952] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:47.419 "name": "raid_bdev1", 00:19:47.419 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:47.419 "strip_size_kb": 0, 00:19:47.419 "state": "online", 00:19:47.419 "raid_level": "raid1", 00:19:47.419 "superblock": true, 00:19:47.419 "num_base_bdevs": 2, 00:19:47.419 "num_base_bdevs_discovered": 2, 00:19:47.419 "num_base_bdevs_operational": 2, 00:19:47.419 "base_bdevs_list": [ 00:19:47.419 { 00:19:47.419 "name": "BaseBdev1", 00:19:47.419 "uuid": "f3eb8f19-0d33-5872-ad53-aa20dd82cf1c", 00:19:47.419 "is_configured": true, 00:19:47.419 "data_offset": 256, 00:19:47.419 "data_size": 7936 00:19:47.419 }, 00:19:47.419 { 00:19:47.419 "name": "BaseBdev2", 00:19:47.419 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:47.419 "is_configured": true, 00:19:47.419 "data_offset": 256, 00:19:47.419 "data_size": 7936 00:19:47.419 } 00:19:47.419 ] 00:19:47.419 }' 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:47.419 14:19:24 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:19:47.986 [2024-11-27 14:19:25.091830] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@616 -- # 
raid_bdev_size=7936 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:47.986 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:19:47.987 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:47.987 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:19:47.987 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:47.987 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:47.987 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:47.987 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:47.987 
14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:19:48.264 [2024-11-27 14:19:25.499674] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:19:48.264 /dev/nbd0 00:19:48.264 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:48.522 1+0 records in 00:19:48.522 1+0 records out 00:19:48.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000556961 s, 7.4 MB/s 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:48.522 14:19:25 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:19:48.522 14:19:25 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 oflag=direct 00:19:49.458 7936+0 records in 00:19:49.458 7936+0 records out 00:19:49.458 32505856 bytes (33 MB, 31 MiB) copied, 0.912337 s, 35.6 MB/s 00:19:49.458 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:19:49.458 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:49.458 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:19:49.458 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:49.458 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:49.458 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:49.458 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:49.717 
[2024-11-27 14:19:26.784245] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.717 [2024-11-27 14:19:26.796388] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:49.717 14:19:26 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:49.717 "name": "raid_bdev1", 00:19:49.717 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:49.717 "strip_size_kb": 0, 00:19:49.717 "state": "online", 00:19:49.717 "raid_level": "raid1", 00:19:49.717 "superblock": true, 00:19:49.717 "num_base_bdevs": 2, 00:19:49.717 "num_base_bdevs_discovered": 1, 00:19:49.717 "num_base_bdevs_operational": 1, 00:19:49.717 "base_bdevs_list": [ 00:19:49.717 { 00:19:49.717 "name": null, 00:19:49.717 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:49.717 "is_configured": false, 00:19:49.717 "data_offset": 0, 00:19:49.717 "data_size": 7936 00:19:49.717 }, 00:19:49.717 { 00:19:49.717 "name": "BaseBdev2", 00:19:49.717 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:49.717 "is_configured": true, 00:19:49.717 "data_offset": 256, 00:19:49.717 
"data_size": 7936 00:19:49.717 } 00:19:49.717 ] 00:19:49.717 }' 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:49.717 14:19:26 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.284 14:19:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:50.285 14:19:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.285 14:19:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:50.285 [2024-11-27 14:19:27.296581] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:50.285 [2024-11-27 14:19:27.313343] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:19:50.285 14:19:27 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.285 14:19:27 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@647 -- # sleep 1 00:19:50.285 [2024-11-27 14:19:27.315988] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:51.216 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:51.216 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:51.216 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:51.216 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:51.217 "name": "raid_bdev1", 00:19:51.217 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:51.217 "strip_size_kb": 0, 00:19:51.217 "state": "online", 00:19:51.217 "raid_level": "raid1", 00:19:51.217 "superblock": true, 00:19:51.217 "num_base_bdevs": 2, 00:19:51.217 "num_base_bdevs_discovered": 2, 00:19:51.217 "num_base_bdevs_operational": 2, 00:19:51.217 "process": { 00:19:51.217 "type": "rebuild", 00:19:51.217 "target": "spare", 00:19:51.217 "progress": { 00:19:51.217 "blocks": 2560, 00:19:51.217 "percent": 32 00:19:51.217 } 00:19:51.217 }, 00:19:51.217 "base_bdevs_list": [ 00:19:51.217 { 00:19:51.217 "name": "spare", 00:19:51.217 "uuid": "d60231ee-887f-551d-9b73-a478da0273aa", 00:19:51.217 "is_configured": true, 00:19:51.217 "data_offset": 256, 00:19:51.217 "data_size": 7936 00:19:51.217 }, 00:19:51.217 { 00:19:51.217 "name": "BaseBdev2", 00:19:51.217 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:51.217 "is_configured": true, 00:19:51.217 "data_offset": 256, 00:19:51.217 "data_size": 7936 00:19:51.217 } 00:19:51.217 ] 00:19:51.217 }' 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:51.217 
14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.217 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.217 [2024-11-27 14:19:28.489487] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:51.474 [2024-11-27 14:19:28.525471] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:19:51.474 [2024-11-27 14:19:28.525587] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:51.474 [2024-11-27 14:19:28.525610] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:51.474 [2024-11-27 14:19:28.525625] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:51.474 14:19:28 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:51.474 "name": "raid_bdev1", 00:19:51.474 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:51.474 "strip_size_kb": 0, 00:19:51.474 "state": "online", 00:19:51.474 "raid_level": "raid1", 00:19:51.474 "superblock": true, 00:19:51.474 "num_base_bdevs": 2, 00:19:51.474 "num_base_bdevs_discovered": 1, 00:19:51.474 "num_base_bdevs_operational": 1, 00:19:51.474 "base_bdevs_list": [ 00:19:51.474 { 00:19:51.474 "name": null, 00:19:51.474 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:51.474 "is_configured": false, 00:19:51.474 "data_offset": 0, 00:19:51.474 "data_size": 7936 00:19:51.474 }, 00:19:51.474 { 00:19:51.474 "name": "BaseBdev2", 00:19:51.474 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:51.474 "is_configured": true, 00:19:51.474 "data_offset": 256, 00:19:51.474 "data_size": 7936 00:19:51.474 } 00:19:51.474 ] 00:19:51.474 }' 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:51.474 14:19:28 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:52.039 "name": "raid_bdev1", 00:19:52.039 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:52.039 "strip_size_kb": 0, 00:19:52.039 "state": "online", 00:19:52.039 "raid_level": "raid1", 00:19:52.039 "superblock": true, 00:19:52.039 "num_base_bdevs": 2, 00:19:52.039 "num_base_bdevs_discovered": 1, 00:19:52.039 "num_base_bdevs_operational": 1, 00:19:52.039 "base_bdevs_list": [ 00:19:52.039 { 00:19:52.039 "name": null, 00:19:52.039 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:52.039 "is_configured": false, 00:19:52.039 "data_offset": 0, 00:19:52.039 "data_size": 7936 00:19:52.039 }, 00:19:52.039 { 00:19:52.039 "name": "BaseBdev2", 00:19:52.039 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:52.039 "is_configured": true, 00:19:52.039 "data_offset": 256, 00:19:52.039 "data_size": 7936 
00:19:52.039 } 00:19:52.039 ] 00:19:52.039 }' 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:52.039 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:52.040 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.040 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:52.040 [2024-11-27 14:19:29.274208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:52.040 [2024-11-27 14:19:29.290458] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:19:52.040 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.040 14:19:29 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@663 -- # sleep 1 00:19:52.040 [2024-11-27 14:19:29.292976] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:19:53.415 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local 
raid_bdev_info 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.416 "name": "raid_bdev1", 00:19:53.416 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:53.416 "strip_size_kb": 0, 00:19:53.416 "state": "online", 00:19:53.416 "raid_level": "raid1", 00:19:53.416 "superblock": true, 00:19:53.416 "num_base_bdevs": 2, 00:19:53.416 "num_base_bdevs_discovered": 2, 00:19:53.416 "num_base_bdevs_operational": 2, 00:19:53.416 "process": { 00:19:53.416 "type": "rebuild", 00:19:53.416 "target": "spare", 00:19:53.416 "progress": { 00:19:53.416 "blocks": 2560, 00:19:53.416 "percent": 32 00:19:53.416 } 00:19:53.416 }, 00:19:53.416 "base_bdevs_list": [ 00:19:53.416 { 00:19:53.416 "name": "spare", 00:19:53.416 "uuid": "d60231ee-887f-551d-9b73-a478da0273aa", 00:19:53.416 "is_configured": true, 00:19:53.416 "data_offset": 256, 00:19:53.416 "data_size": 7936 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "name": "BaseBdev2", 00:19:53.416 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:53.416 "is_configured": true, 00:19:53.416 "data_offset": 256, 00:19:53.416 "data_size": 7936 00:19:53.416 } 00:19:53.416 ] 00:19:53.416 }' 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 
00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:19:53.416 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@706 -- # local timeout=737 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | 
select(.name == "raid_bdev1")' 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:53.416 "name": "raid_bdev1", 00:19:53.416 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:53.416 "strip_size_kb": 0, 00:19:53.416 "state": "online", 00:19:53.416 "raid_level": "raid1", 00:19:53.416 "superblock": true, 00:19:53.416 "num_base_bdevs": 2, 00:19:53.416 "num_base_bdevs_discovered": 2, 00:19:53.416 "num_base_bdevs_operational": 2, 00:19:53.416 "process": { 00:19:53.416 "type": "rebuild", 00:19:53.416 "target": "spare", 00:19:53.416 "progress": { 00:19:53.416 "blocks": 2816, 00:19:53.416 "percent": 35 00:19:53.416 } 00:19:53.416 }, 00:19:53.416 "base_bdevs_list": [ 00:19:53.416 { 00:19:53.416 "name": "spare", 00:19:53.416 "uuid": "d60231ee-887f-551d-9b73-a478da0273aa", 00:19:53.416 "is_configured": true, 00:19:53.416 "data_offset": 256, 00:19:53.416 "data_size": 7936 00:19:53.416 }, 00:19:53.416 { 00:19:53.416 "name": "BaseBdev2", 00:19:53.416 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:53.416 "is_configured": true, 00:19:53.416 "data_offset": 256, 00:19:53.416 "data_size": 7936 00:19:53.416 } 00:19:53.416 ] 00:19:53.416 }' 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:53.416 14:19:30 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:54.799 14:19:31 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:54.799 "name": "raid_bdev1", 00:19:54.799 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:54.799 "strip_size_kb": 0, 00:19:54.799 "state": "online", 00:19:54.799 "raid_level": "raid1", 00:19:54.799 "superblock": true, 00:19:54.799 "num_base_bdevs": 2, 00:19:54.799 "num_base_bdevs_discovered": 2, 00:19:54.799 "num_base_bdevs_operational": 2, 00:19:54.799 "process": { 00:19:54.799 "type": "rebuild", 00:19:54.799 "target": "spare", 00:19:54.799 "progress": { 00:19:54.799 "blocks": 5888, 00:19:54.799 "percent": 74 00:19:54.799 } 00:19:54.799 }, 00:19:54.799 "base_bdevs_list": [ 00:19:54.799 { 00:19:54.799 "name": "spare", 00:19:54.799 "uuid": 
"d60231ee-887f-551d-9b73-a478da0273aa", 00:19:54.799 "is_configured": true, 00:19:54.799 "data_offset": 256, 00:19:54.799 "data_size": 7936 00:19:54.799 }, 00:19:54.799 { 00:19:54.799 "name": "BaseBdev2", 00:19:54.799 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:54.799 "is_configured": true, 00:19:54.799 "data_offset": 256, 00:19:54.799 "data_size": 7936 00:19:54.799 } 00:19:54.799 ] 00:19:54.799 }' 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:19:54.799 14:19:31 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@711 -- # sleep 1 00:19:55.383 [2024-11-27 14:19:32.416271] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:19:55.383 [2024-11-27 14:19:32.416398] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:19:55.383 [2024-11-27 14:19:32.416586] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.642 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.642 "name": "raid_bdev1", 00:19:55.642 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:55.642 "strip_size_kb": 0, 00:19:55.642 "state": "online", 00:19:55.642 "raid_level": "raid1", 00:19:55.642 "superblock": true, 00:19:55.643 "num_base_bdevs": 2, 00:19:55.643 "num_base_bdevs_discovered": 2, 00:19:55.643 "num_base_bdevs_operational": 2, 00:19:55.643 "base_bdevs_list": [ 00:19:55.643 { 00:19:55.643 "name": "spare", 00:19:55.643 "uuid": "d60231ee-887f-551d-9b73-a478da0273aa", 00:19:55.643 "is_configured": true, 00:19:55.643 "data_offset": 256, 00:19:55.643 "data_size": 7936 00:19:55.643 }, 00:19:55.643 { 00:19:55.643 "name": "BaseBdev2", 00:19:55.643 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:55.643 "is_configured": true, 00:19:55.643 "data_offset": 256, 00:19:55.643 "data_size": 7936 00:19:55.643 } 00:19:55.643 ] 00:19:55.643 }' 00:19:55.643 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@709 -- # break 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.901 14:19:32 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:55.901 "name": "raid_bdev1", 00:19:55.901 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:55.901 "strip_size_kb": 0, 00:19:55.901 "state": "online", 00:19:55.901 "raid_level": "raid1", 00:19:55.901 "superblock": true, 00:19:55.901 "num_base_bdevs": 2, 00:19:55.901 "num_base_bdevs_discovered": 2, 00:19:55.901 "num_base_bdevs_operational": 2, 00:19:55.901 "base_bdevs_list": [ 00:19:55.901 { 00:19:55.901 "name": "spare", 00:19:55.901 "uuid": "d60231ee-887f-551d-9b73-a478da0273aa", 00:19:55.901 "is_configured": true, 00:19:55.901 "data_offset": 256, 00:19:55.901 "data_size": 7936 00:19:55.901 }, 
00:19:55.901 { 00:19:55.901 "name": "BaseBdev2", 00:19:55.901 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:55.901 "is_configured": true, 00:19:55.901 "data_offset": 256, 00:19:55.901 "data_size": 7936 00:19:55.901 } 00:19:55.901 ] 00:19:55.901 }' 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:55.901 14:19:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:55.901 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.160 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:56.160 "name": "raid_bdev1", 00:19:56.160 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:56.160 "strip_size_kb": 0, 00:19:56.160 "state": "online", 00:19:56.160 "raid_level": "raid1", 00:19:56.160 "superblock": true, 00:19:56.160 "num_base_bdevs": 2, 00:19:56.160 "num_base_bdevs_discovered": 2, 00:19:56.160 "num_base_bdevs_operational": 2, 00:19:56.160 "base_bdevs_list": [ 00:19:56.160 { 00:19:56.160 "name": "spare", 00:19:56.160 "uuid": "d60231ee-887f-551d-9b73-a478da0273aa", 00:19:56.160 "is_configured": true, 00:19:56.160 "data_offset": 256, 00:19:56.160 "data_size": 7936 00:19:56.160 }, 00:19:56.160 { 00:19:56.160 "name": "BaseBdev2", 00:19:56.160 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:56.160 "is_configured": true, 00:19:56.160 "data_offset": 256, 00:19:56.160 "data_size": 7936 00:19:56.160 } 00:19:56.160 ] 00:19:56.160 }' 00:19:56.160 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:56.160 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.418 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:19:56.418 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.418 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.418 [2024-11-27 14:19:33.669045] bdev_raid.c:2411:raid_bdev_delete: 
*DEBUG*: delete raid bdev: raid_bdev1 00:19:56.418 [2024-11-27 14:19:33.669215] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:19:56.418 [2024-11-27 14:19:33.669421] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:19:56.418 [2024-11-27 14:19:33.669526] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:19:56.418 [2024-11-27 14:19:33.669548] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:19:56.418 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.418 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:56.418 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.418 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:56.418 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # jq length 00:19:56.418 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.677 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:19:56.677 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:19:56.677 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:19:56.678 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:19:56.678 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:19:56.678 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:19:56.678 14:19:33 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@10 -- # local bdev_list 00:19:56.678 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:56.678 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@11 -- # local nbd_list 00:19:56.678 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@12 -- # local i 00:19:56.678 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:19:56.678 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:56.678 14:19:33 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:19:56.936 /dev/nbd0 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 
of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:56.937 1+0 records in 00:19:56.937 1+0 records out 00:19:56.937 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000419565 s, 9.8 MB/s 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:56.937 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:19:57.196 /dev/nbd1 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@873 -- # local i 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 
00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@877 -- # break 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:19:57.196 1+0 records in 00:19:57.196 1+0 records out 00:19:57.196 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000371461 s, 11.0 MB/s 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@890 -- # size=4096 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@893 -- # return 0 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:19:57.196 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:19:57.455 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:19:57.455 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:19:57.455 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:19:57.455 14:19:34 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@50 -- # local nbd_list 00:19:57.455 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@51 -- # local i 00:19:57.455 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.455 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:19:57.714 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:19:57.714 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:19:57.714 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:19:57.714 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.714 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.714 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:19:57.714 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:57.714 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.714 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:19:57.714 14:19:34 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:19:57.973 
14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@41 -- # break 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/nbd_common.sh@45 -- # return 0 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:57.973 [2024-11-27 14:19:35.229938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:19:57.973 [2024-11-27 14:19:35.230001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:19:57.973 [2024-11-27 14:19:35.230043] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:19:57.973 [2024-11-27 14:19:35.230059] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:19:57.973 [2024-11-27 14:19:35.233127] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:19:57.973 [2024-11-27 14:19:35.233172] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: spare 00:19:57.973 [2024-11-27 14:19:35.233288] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:19:57.973 [2024-11-27 14:19:35.233358] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:57.973 [2024-11-27 14:19:35.233543] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:19:57.973 spare 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.973 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.232 [2024-11-27 14:19:35.333706] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:19:58.232 [2024-11-27 14:19:35.333768] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:19:58.232 [2024-11-27 14:19:35.334203] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:19:58.232 [2024-11-27 14:19:35.334510] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:19:58.232 [2024-11-27 14:19:35.334561] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:19:58.232 [2024-11-27 14:19:35.334856] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:19:58.232 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:58.233 
14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:58.233 "name": "raid_bdev1", 00:19:58.233 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:58.233 "strip_size_kb": 0, 00:19:58.233 "state": "online", 00:19:58.233 "raid_level": "raid1", 00:19:58.233 "superblock": true, 00:19:58.233 "num_base_bdevs": 2, 00:19:58.233 "num_base_bdevs_discovered": 2, 00:19:58.233 "num_base_bdevs_operational": 2, 00:19:58.233 "base_bdevs_list": [ 00:19:58.233 { 00:19:58.233 "name": "spare", 00:19:58.233 "uuid": 
"d60231ee-887f-551d-9b73-a478da0273aa", 00:19:58.233 "is_configured": true, 00:19:58.233 "data_offset": 256, 00:19:58.233 "data_size": 7936 00:19:58.233 }, 00:19:58.233 { 00:19:58.233 "name": "BaseBdev2", 00:19:58.233 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:58.233 "is_configured": true, 00:19:58.233 "data_offset": 256, 00:19:58.233 "data_size": 7936 00:19:58.233 } 00:19:58.233 ] 00:19:58.233 }' 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:58.233 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:19:58.800 "name": "raid_bdev1", 00:19:58.800 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:58.800 "strip_size_kb": 0, 00:19:58.800 
"state": "online", 00:19:58.800 "raid_level": "raid1", 00:19:58.800 "superblock": true, 00:19:58.800 "num_base_bdevs": 2, 00:19:58.800 "num_base_bdevs_discovered": 2, 00:19:58.800 "num_base_bdevs_operational": 2, 00:19:58.800 "base_bdevs_list": [ 00:19:58.800 { 00:19:58.800 "name": "spare", 00:19:58.800 "uuid": "d60231ee-887f-551d-9b73-a478da0273aa", 00:19:58.800 "is_configured": true, 00:19:58.800 "data_offset": 256, 00:19:58.800 "data_size": 7936 00:19:58.800 }, 00:19:58.800 { 00:19:58.800 "name": "BaseBdev2", 00:19:58.800 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:58.800 "is_configured": true, 00:19:58.800 "data_offset": 256, 00:19:58.800 "data_size": 7936 00:19:58.800 } 00:19:58.800 ] 00:19:58.800 }' 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:19:58.800 14:19:35 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:19:58.800 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:19:58.800 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:58.800 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.800 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:58.800 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:19:58.800 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:19:59.059 14:19:36 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.059 [2024-11-27 14:19:36.095105] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.059 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.060 
14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.060 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:19:59.060 "name": "raid_bdev1", 00:19:59.060 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:19:59.060 "strip_size_kb": 0, 00:19:59.060 "state": "online", 00:19:59.060 "raid_level": "raid1", 00:19:59.060 "superblock": true, 00:19:59.060 "num_base_bdevs": 2, 00:19:59.060 "num_base_bdevs_discovered": 1, 00:19:59.060 "num_base_bdevs_operational": 1, 00:19:59.060 "base_bdevs_list": [ 00:19:59.060 { 00:19:59.060 "name": null, 00:19:59.060 "uuid": "00000000-0000-0000-0000-000000000000", 00:19:59.060 "is_configured": false, 00:19:59.060 "data_offset": 0, 00:19:59.060 "data_size": 7936 00:19:59.060 }, 00:19:59.060 { 00:19:59.060 "name": "BaseBdev2", 00:19:59.060 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:19:59.060 "is_configured": true, 00:19:59.060 "data_offset": 256, 00:19:59.060 "data_size": 7936 00:19:59.060 } 00:19:59.060 ] 00:19:59.060 }' 00:19:59.060 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:19:59.060 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.627 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:19:59.627 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.627 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:19:59.627 [2024-11-27 14:19:36.615371] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:59.627 [2024-11-27 14:19:36.615839] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:19:59.627 [2024-11-27 14:19:36.615993] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: 
Re-adding bdev spare to raid bdev raid_bdev1. 00:19:59.627 [2024-11-27 14:19:36.616057] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:19:59.627 [2024-11-27 14:19:36.632284] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:19:59.627 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.627 14:19:36 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@757 -- # sleep 1 00:19:59.627 [2024-11-27 14:19:36.635211] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:00.563 "name": "raid_bdev1", 00:20:00.563 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:20:00.563 
"strip_size_kb": 0, 00:20:00.563 "state": "online", 00:20:00.563 "raid_level": "raid1", 00:20:00.563 "superblock": true, 00:20:00.563 "num_base_bdevs": 2, 00:20:00.563 "num_base_bdevs_discovered": 2, 00:20:00.563 "num_base_bdevs_operational": 2, 00:20:00.563 "process": { 00:20:00.563 "type": "rebuild", 00:20:00.563 "target": "spare", 00:20:00.563 "progress": { 00:20:00.563 "blocks": 2560, 00:20:00.563 "percent": 32 00:20:00.563 } 00:20:00.563 }, 00:20:00.563 "base_bdevs_list": [ 00:20:00.563 { 00:20:00.563 "name": "spare", 00:20:00.563 "uuid": "d60231ee-887f-551d-9b73-a478da0273aa", 00:20:00.563 "is_configured": true, 00:20:00.563 "data_offset": 256, 00:20:00.563 "data_size": 7936 00:20:00.563 }, 00:20:00.563 { 00:20:00.563 "name": "BaseBdev2", 00:20:00.563 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:20:00.563 "is_configured": true, 00:20:00.563 "data_offset": 256, 00:20:00.563 "data_size": 7936 00:20:00.563 } 00:20:00.563 ] 00:20:00.563 }' 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.563 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.563 [2024-11-27 14:19:37.808731] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.823 [2024-11-27 14:19:37.844672] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev 
raid_bdev1: No such device 00:20:00.823 [2024-11-27 14:19:37.844766] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:00.823 [2024-11-27 14:19:37.844803] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:00.823 [2024-11-27 14:19:37.844836] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:00.823 "name": "raid_bdev1", 00:20:00.823 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:20:00.823 "strip_size_kb": 0, 00:20:00.823 "state": "online", 00:20:00.823 "raid_level": "raid1", 00:20:00.823 "superblock": true, 00:20:00.823 "num_base_bdevs": 2, 00:20:00.823 "num_base_bdevs_discovered": 1, 00:20:00.823 "num_base_bdevs_operational": 1, 00:20:00.823 "base_bdevs_list": [ 00:20:00.823 { 00:20:00.823 "name": null, 00:20:00.823 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:00.823 "is_configured": false, 00:20:00.823 "data_offset": 0, 00:20:00.823 "data_size": 7936 00:20:00.823 }, 00:20:00.823 { 00:20:00.823 "name": "BaseBdev2", 00:20:00.823 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:20:00.823 "is_configured": true, 00:20:00.823 "data_offset": 256, 00:20:00.823 "data_size": 7936 00:20:00.823 } 00:20:00.823 ] 00:20:00.823 }' 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:00.823 14:19:37 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.391 14:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:01.391 14:19:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.391 14:19:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:01.391 [2024-11-27 14:19:38.387587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:01.391 [2024-11-27 14:19:38.387669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:01.391 [2024-11-27 
14:19:38.387702] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:01.391 [2024-11-27 14:19:38.387719] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:01.391 [2024-11-27 14:19:38.388341] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:01.391 [2024-11-27 14:19:38.388390] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:01.391 [2024-11-27 14:19:38.388511] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:01.391 [2024-11-27 14:19:38.388536] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:01.391 [2024-11-27 14:19:38.388550] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:01.391 [2024-11-27 14:19:38.388602] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:01.391 [2024-11-27 14:19:38.404959] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:01.391 spare 00:20:01.391 14:19:38 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.391 14:19:38 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:01.391 [2024-11-27 14:19:38.407662] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=spare 
00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:02.328 "name": "raid_bdev1", 00:20:02.328 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:20:02.328 "strip_size_kb": 0, 00:20:02.328 "state": "online", 00:20:02.328 "raid_level": "raid1", 00:20:02.328 "superblock": true, 00:20:02.328 "num_base_bdevs": 2, 00:20:02.328 "num_base_bdevs_discovered": 2, 00:20:02.328 "num_base_bdevs_operational": 2, 00:20:02.328 "process": { 00:20:02.328 "type": "rebuild", 00:20:02.328 "target": "spare", 00:20:02.328 "progress": { 00:20:02.328 "blocks": 2560, 00:20:02.328 "percent": 32 00:20:02.328 } 00:20:02.328 }, 00:20:02.328 "base_bdevs_list": [ 00:20:02.328 { 00:20:02.328 "name": "spare", 00:20:02.328 "uuid": "d60231ee-887f-551d-9b73-a478da0273aa", 00:20:02.328 "is_configured": true, 00:20:02.328 "data_offset": 256, 00:20:02.328 "data_size": 7936 00:20:02.328 }, 00:20:02.328 { 00:20:02.328 "name": "BaseBdev2", 00:20:02.328 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:20:02.328 "is_configured": true, 00:20:02.328 "data_offset": 256, 00:20:02.328 "data_size": 7936 00:20:02.328 } 00:20:02.328 ] 00:20:02.328 }' 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:02.328 14:19:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.328 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.328 [2024-11-27 14:19:39.581279] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.587 [2024-11-27 14:19:39.616466] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:02.587 [2024-11-27 14:19:39.616582] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:02.587 [2024-11-27 14:19:39.616609] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:02.587 [2024-11-27 14:19:39.616619] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:02.587 14:19:39 
bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.587 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:02.587 "name": "raid_bdev1", 00:20:02.587 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:20:02.587 "strip_size_kb": 0, 00:20:02.587 "state": "online", 00:20:02.587 "raid_level": "raid1", 00:20:02.587 "superblock": true, 00:20:02.587 "num_base_bdevs": 2, 00:20:02.587 "num_base_bdevs_discovered": 1, 00:20:02.587 "num_base_bdevs_operational": 1, 00:20:02.587 "base_bdevs_list": [ 00:20:02.587 { 00:20:02.587 "name": null, 00:20:02.587 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:02.587 "is_configured": false, 00:20:02.587 "data_offset": 0, 00:20:02.587 "data_size": 7936 00:20:02.587 }, 00:20:02.587 { 00:20:02.587 "name": "BaseBdev2", 00:20:02.587 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:20:02.587 "is_configured": true, 00:20:02.587 "data_offset": 256, 00:20:02.587 
"data_size": 7936 00:20:02.587 } 00:20:02.587 ] 00:20:02.587 }' 00:20:02.588 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:02.588 14:19:39 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:03.156 "name": "raid_bdev1", 00:20:03.156 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:20:03.156 "strip_size_kb": 0, 00:20:03.156 "state": "online", 00:20:03.156 "raid_level": "raid1", 00:20:03.156 "superblock": true, 00:20:03.156 "num_base_bdevs": 2, 00:20:03.156 "num_base_bdevs_discovered": 1, 00:20:03.156 "num_base_bdevs_operational": 1, 00:20:03.156 "base_bdevs_list": [ 00:20:03.156 { 00:20:03.156 "name": null, 00:20:03.156 "uuid": "00000000-0000-0000-0000-000000000000", 
00:20:03.156 "is_configured": false, 00:20:03.156 "data_offset": 0, 00:20:03.156 "data_size": 7936 00:20:03.156 }, 00:20:03.156 { 00:20:03.156 "name": "BaseBdev2", 00:20:03.156 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:20:03.156 "is_configured": true, 00:20:03.156 "data_offset": 256, 00:20:03.156 "data_size": 7936 00:20:03.156 } 00:20:03.156 ] 00:20:03.156 }' 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:03.156 [2024-11-27 14:19:40.370469] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:03.156 [2024-11-27 14:19:40.370540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:03.156 [2024-11-27 14:19:40.370610] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 
0x0x61600000b180 00:20:03.156 [2024-11-27 14:19:40.370664] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:03.156 [2024-11-27 14:19:40.371269] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:03.156 [2024-11-27 14:19:40.371311] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:03.156 [2024-11-27 14:19:40.371416] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:03.156 [2024-11-27 14:19:40.371438] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:03.156 [2024-11-27 14:19:40.371454] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:03.156 [2024-11-27 14:19:40.371467] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:03.156 BaseBdev1 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.156 14:19:40 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:04.535 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:04.535 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:04.535 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:04.535 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:04.535 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:04.535 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:04.535 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local 
raid_bdev_info 00:20:04.535 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:04.535 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:04.535 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:04.536 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.536 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.536 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.536 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.536 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.536 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:04.536 "name": "raid_bdev1", 00:20:04.536 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:20:04.536 "strip_size_kb": 0, 00:20:04.536 "state": "online", 00:20:04.536 "raid_level": "raid1", 00:20:04.536 "superblock": true, 00:20:04.536 "num_base_bdevs": 2, 00:20:04.536 "num_base_bdevs_discovered": 1, 00:20:04.536 "num_base_bdevs_operational": 1, 00:20:04.536 "base_bdevs_list": [ 00:20:04.536 { 00:20:04.536 "name": null, 00:20:04.536 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.536 "is_configured": false, 00:20:04.536 "data_offset": 0, 00:20:04.536 "data_size": 7936 00:20:04.536 }, 00:20:04.536 { 00:20:04.536 "name": "BaseBdev2", 00:20:04.536 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:20:04.536 "is_configured": true, 00:20:04.536 "data_offset": 256, 00:20:04.536 "data_size": 7936 00:20:04.536 } 00:20:04.536 ] 00:20:04.536 }' 00:20:04.536 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:04.536 14:19:41 
bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:04.795 "name": "raid_bdev1", 00:20:04.795 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:20:04.795 "strip_size_kb": 0, 00:20:04.795 "state": "online", 00:20:04.795 "raid_level": "raid1", 00:20:04.795 "superblock": true, 00:20:04.795 "num_base_bdevs": 2, 00:20:04.795 "num_base_bdevs_discovered": 1, 00:20:04.795 "num_base_bdevs_operational": 1, 00:20:04.795 "base_bdevs_list": [ 00:20:04.795 { 00:20:04.795 "name": null, 00:20:04.795 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:04.795 "is_configured": false, 00:20:04.795 "data_offset": 0, 00:20:04.795 "data_size": 7936 00:20:04.795 }, 00:20:04.795 { 00:20:04.795 "name": "BaseBdev2", 00:20:04.795 "uuid": 
"c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:20:04.795 "is_configured": true, 00:20:04.795 "data_offset": 256, 00:20:04.795 "data_size": 7936 00:20:04.795 } 00:20:04.795 ] 00:20:04.795 }' 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:04.795 14:19:41 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@652 -- # local es=0 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:04.795 [2024-11-27 14:19:42.047208] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 
00:20:04.795 [2024-11-27 14:19:42.047449] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:04.795 [2024-11-27 14:19:42.047487] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:04.795 request: 00:20:04.795 { 00:20:04.795 "base_bdev": "BaseBdev1", 00:20:04.795 "raid_bdev": "raid_bdev1", 00:20:04.795 "method": "bdev_raid_add_base_bdev", 00:20:04.795 "req_id": 1 00:20:04.795 } 00:20:04.795 Got JSON-RPC error response 00:20:04.795 response: 00:20:04.795 { 00:20:04.795 "code": -22, 00:20:04.795 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:04.795 } 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@655 -- # es=1 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:04.795 14:19:42 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:06.173 "name": "raid_bdev1", 00:20:06.173 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:20:06.173 "strip_size_kb": 0, 00:20:06.173 "state": "online", 00:20:06.173 "raid_level": "raid1", 00:20:06.173 "superblock": true, 00:20:06.173 "num_base_bdevs": 2, 00:20:06.173 "num_base_bdevs_discovered": 1, 00:20:06.173 "num_base_bdevs_operational": 1, 00:20:06.173 "base_bdevs_list": [ 00:20:06.173 { 00:20:06.173 "name": null, 00:20:06.173 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.173 "is_configured": false, 00:20:06.173 "data_offset": 0, 00:20:06.173 "data_size": 7936 00:20:06.173 }, 00:20:06.173 { 00:20:06.173 "name": "BaseBdev2", 00:20:06.173 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:20:06.173 "is_configured": true, 00:20:06.173 "data_offset": 256, 00:20:06.173 "data_size": 7936 00:20:06.173 } 
00:20:06.173 ] 00:20:06.173 }' 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:06.173 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.431 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:06.431 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:06.431 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:06.431 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:06.431 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:06.431 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:06.431 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.431 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:06.431 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:06.432 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.432 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:06.432 "name": "raid_bdev1", 00:20:06.432 "uuid": "b2b6a42d-f2b6-4d3a-af74-36641b88bb9f", 00:20:06.432 "strip_size_kb": 0, 00:20:06.432 "state": "online", 00:20:06.432 "raid_level": "raid1", 00:20:06.432 "superblock": true, 00:20:06.432 "num_base_bdevs": 2, 00:20:06.432 "num_base_bdevs_discovered": 1, 00:20:06.432 "num_base_bdevs_operational": 1, 00:20:06.432 "base_bdevs_list": [ 00:20:06.432 { 00:20:06.432 "name": null, 00:20:06.432 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:06.432 "is_configured": false, 
00:20:06.432 "data_offset": 0, 00:20:06.432 "data_size": 7936 00:20:06.432 }, 00:20:06.432 { 00:20:06.432 "name": "BaseBdev2", 00:20:06.432 "uuid": "c4881d4e-a86f-53a7-91e1-d4fdea5f05a7", 00:20:06.432 "is_configured": true, 00:20:06.432 "data_offset": 256, 00:20:06.432 "data_size": 7936 00:20:06.432 } 00:20:06.432 ] 00:20:06.432 }' 00:20:06.432 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:06.432 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:06.432 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:06.690 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:06.690 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@784 -- # killprocess 86836 00:20:06.690 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@954 -- # '[' -z 86836 ']' 00:20:06.690 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@958 -- # kill -0 86836 00:20:06.690 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # uname 00:20:06.690 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:06.690 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 86836 00:20:06.690 killing process with pid 86836 00:20:06.690 Received shutdown signal, test time was about 60.000000 seconds 00:20:06.690 00:20:06.690 Latency(us) 00:20:06.690 [2024-11-27T14:19:43.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:06.690 [2024-11-27T14:19:43.968Z] =================================================================================================================== 00:20:06.690 [2024-11-27T14:19:43.968Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:06.690 
14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:06.690 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:06.690 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@972 -- # echo 'killing process with pid 86836' 00:20:06.691 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@973 -- # kill 86836 00:20:06.691 [2024-11-27 14:19:43.750129] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:06.691 14:19:43 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@978 -- # wait 86836 00:20:06.691 [2024-11-27 14:19:43.750319] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:06.691 [2024-11-27 14:19:43.750396] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:06.691 [2024-11-27 14:19:43.750413] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:06.949 [2024-11-27 14:19:44.011073] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:07.922 ************************************ 00:20:07.922 END TEST raid_rebuild_test_sb_4k 00:20:07.922 ************************************ 00:20:07.922 14:19:45 bdev_raid.raid_rebuild_test_sb_4k -- bdev/bdev_raid.sh@786 -- # return 0 00:20:07.922 00:20:07.922 real 0m21.721s 00:20:07.922 user 0m29.474s 00:20:07.922 sys 0m2.595s 00:20:07.922 14:19:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:07.922 14:19:45 bdev_raid.raid_rebuild_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:20:07.922 14:19:45 bdev_raid -- bdev/bdev_raid.sh@1003 -- # base_malloc_params='-m 32' 00:20:07.922 14:19:45 bdev_raid -- bdev/bdev_raid.sh@1004 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:20:07.922 
14:19:45 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:07.922 14:19:45 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:07.922 14:19:45 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:07.922 ************************************ 00:20:07.922 START TEST raid_state_function_test_sb_md_separate 00:20:07.922 ************************************ 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 
00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # superblock_create_arg=-s 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@229 -- # raid_pid=87539 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:07.922 Process raid pid: 87539 00:20:07.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 87539' 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@231 -- # waitforlisten 87539 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87539 ']' 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.922 14:19:45 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:08.181 [2024-11-27 14:19:45.226317] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:20:08.181 [2024-11-27 14:19:45.226499] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.181 [2024-11-27 14:19:45.422317] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:08.439 [2024-11-27 14:19:45.583921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.698 [2024-11-27 14:19:45.800462] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:08.698 [2024-11-27 14:19:45.800515] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:09.265 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:09.265 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:09.265 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:09.265 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.265 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.265 [2024-11-27 14:19:46.242743] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:09.265 [2024-11-27 14:19:46.242844] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:09.265 [2024-11-27 14:19:46.242863] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:09.265 [2024-11-27 14:19:46.242880] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:09.265 14:19:46 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.265 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:09.265 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.265 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.265 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.265 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.265 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:09.266 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.266 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.266 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.266 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.266 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.266 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.266 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.266 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.266 14:19:46 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.266 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.266 "name": "Existed_Raid", 00:20:09.266 "uuid": "a78c300a-5713-4f72-a7a5-52ff241b4c73", 00:20:09.266 "strip_size_kb": 0, 00:20:09.266 "state": "configuring", 00:20:09.266 "raid_level": "raid1", 00:20:09.266 "superblock": true, 00:20:09.266 "num_base_bdevs": 2, 00:20:09.266 "num_base_bdevs_discovered": 0, 00:20:09.266 "num_base_bdevs_operational": 2, 00:20:09.266 "base_bdevs_list": [ 00:20:09.266 { 00:20:09.266 "name": "BaseBdev1", 00:20:09.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.266 "is_configured": false, 00:20:09.266 "data_offset": 0, 00:20:09.266 "data_size": 0 00:20:09.266 }, 00:20:09.266 { 00:20:09.266 "name": "BaseBdev2", 00:20:09.266 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.266 "is_configured": false, 00:20:09.266 "data_offset": 0, 00:20:09.266 "data_size": 0 00:20:09.266 } 00:20:09.266 ] 00:20:09.266 }' 00:20:09.266 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.266 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.525 [2024-11-27 14:19:46.730934] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:09.525 [2024-11-27 14:19:46.731135] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state configuring 00:20:09.525 
14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.525 [2024-11-27 14:19:46.742947] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:09.525 [2024-11-27 14:19:46.743008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:09.525 [2024-11-27 14:19:46.743024] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:09.525 [2024-11-27 14:19:46.743044] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.525 [2024-11-27 14:19:46.789307] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:09.525 BaseBdev1 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:09.525 14:19:46 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.525 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.867 [ 00:20:09.867 { 00:20:09.867 "name": "BaseBdev1", 00:20:09.867 "aliases": [ 00:20:09.867 "ad00d45b-2555-4bca-a476-dab759880df4" 00:20:09.867 ], 00:20:09.867 "product_name": "Malloc disk", 00:20:09.867 "block_size": 4096, 00:20:09.867 "num_blocks": 8192, 00:20:09.867 "uuid": "ad00d45b-2555-4bca-a476-dab759880df4", 00:20:09.867 "md_size": 32, 00:20:09.867 "md_interleave": false, 00:20:09.867 "dif_type": 0, 00:20:09.867 "assigned_rate_limits": { 00:20:09.867 
"rw_ios_per_sec": 0, 00:20:09.867 "rw_mbytes_per_sec": 0, 00:20:09.867 "r_mbytes_per_sec": 0, 00:20:09.867 "w_mbytes_per_sec": 0 00:20:09.867 }, 00:20:09.867 "claimed": true, 00:20:09.867 "claim_type": "exclusive_write", 00:20:09.867 "zoned": false, 00:20:09.867 "supported_io_types": { 00:20:09.867 "read": true, 00:20:09.867 "write": true, 00:20:09.867 "unmap": true, 00:20:09.867 "flush": true, 00:20:09.867 "reset": true, 00:20:09.867 "nvme_admin": false, 00:20:09.867 "nvme_io": false, 00:20:09.867 "nvme_io_md": false, 00:20:09.867 "write_zeroes": true, 00:20:09.867 "zcopy": true, 00:20:09.867 "get_zone_info": false, 00:20:09.867 "zone_management": false, 00:20:09.867 "zone_append": false, 00:20:09.867 "compare": false, 00:20:09.867 "compare_and_write": false, 00:20:09.867 "abort": true, 00:20:09.867 "seek_hole": false, 00:20:09.867 "seek_data": false, 00:20:09.867 "copy": true, 00:20:09.867 "nvme_iov_md": false 00:20:09.867 }, 00:20:09.867 "memory_domains": [ 00:20:09.867 { 00:20:09.867 "dma_device_id": "system", 00:20:09.867 "dma_device_type": 1 00:20:09.867 }, 00:20:09.867 { 00:20:09.867 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:09.867 "dma_device_type": 2 00:20:09.867 } 00:20:09.867 ], 00:20:09.867 "driver_specific": {} 00:20:09.867 } 00:20:09.867 ] 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:09.867 14:19:46 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:09.867 "name": "Existed_Raid", 00:20:09.867 "uuid": "443d2fd2-fabc-4208-9665-821965587147", 00:20:09.867 "strip_size_kb": 0, 00:20:09.867 "state": "configuring", 00:20:09.867 "raid_level": "raid1", 00:20:09.867 "superblock": true, 00:20:09.867 "num_base_bdevs": 2, 00:20:09.867 "num_base_bdevs_discovered": 1, 00:20:09.867 "num_base_bdevs_operational": 2, 00:20:09.867 
"base_bdevs_list": [ 00:20:09.867 { 00:20:09.867 "name": "BaseBdev1", 00:20:09.867 "uuid": "ad00d45b-2555-4bca-a476-dab759880df4", 00:20:09.867 "is_configured": true, 00:20:09.867 "data_offset": 256, 00:20:09.867 "data_size": 7936 00:20:09.867 }, 00:20:09.867 { 00:20:09.867 "name": "BaseBdev2", 00:20:09.867 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:09.867 "is_configured": false, 00:20:09.867 "data_offset": 0, 00:20:09.867 "data_size": 0 00:20:09.867 } 00:20:09.867 ] 00:20:09.867 }' 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:09.867 14:19:46 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.150 [2024-11-27 14:19:47.345589] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:10.150 [2024-11-27 14:19:47.345646] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.150 [2024-11-27 14:19:47.353644] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:10.150 [2024-11-27 14:19:47.356304] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:10.150 [2024-11-27 14:19:47.356373] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 
-- # local tmp 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.150 "name": "Existed_Raid", 00:20:10.150 "uuid": "fdbc4fb1-c521-479c-832f-2a2e88925b6f", 00:20:10.150 "strip_size_kb": 0, 00:20:10.150 "state": "configuring", 00:20:10.150 "raid_level": "raid1", 00:20:10.150 "superblock": true, 00:20:10.150 "num_base_bdevs": 2, 00:20:10.150 "num_base_bdevs_discovered": 1, 00:20:10.150 "num_base_bdevs_operational": 2, 00:20:10.150 "base_bdevs_list": [ 00:20:10.150 { 00:20:10.150 "name": "BaseBdev1", 00:20:10.150 "uuid": "ad00d45b-2555-4bca-a476-dab759880df4", 00:20:10.150 "is_configured": true, 00:20:10.150 "data_offset": 256, 00:20:10.150 "data_size": 7936 00:20:10.150 }, 00:20:10.150 { 00:20:10.150 "name": "BaseBdev2", 00:20:10.150 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:10.150 "is_configured": false, 00:20:10.150 "data_offset": 0, 00:20:10.150 "data_size": 0 00:20:10.150 } 00:20:10.150 ] 00:20:10.150 }' 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.150 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # rpc_cmd 
bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.717 [2024-11-27 14:19:47.949826] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:10.717 [2024-11-27 14:19:47.950163] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:10.717 [2024-11-27 14:19:47.950186] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:10.717 [2024-11-27 14:19:47.950317] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:10.717 [2024-11-27 14:19:47.950485] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:10.717 [2024-11-27 14:19:47.950505] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:10.717 BaseBdev2 00:20:10.717 [2024-11-27 14:19:47.950623] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # local i 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:10.717 14:19:47 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.717 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.717 [ 00:20:10.717 { 00:20:10.717 "name": "BaseBdev2", 00:20:10.717 "aliases": [ 00:20:10.717 "c1211daf-ea13-477f-90d5-43802a33d868" 00:20:10.717 ], 00:20:10.717 "product_name": "Malloc disk", 00:20:10.717 "block_size": 4096, 00:20:10.717 "num_blocks": 8192, 00:20:10.717 "uuid": "c1211daf-ea13-477f-90d5-43802a33d868", 00:20:10.717 "md_size": 32, 00:20:10.717 "md_interleave": false, 00:20:10.717 "dif_type": 0, 00:20:10.717 "assigned_rate_limits": { 00:20:10.717 "rw_ios_per_sec": 0, 00:20:10.717 "rw_mbytes_per_sec": 0, 00:20:10.717 "r_mbytes_per_sec": 0, 00:20:10.717 "w_mbytes_per_sec": 0 00:20:10.717 }, 00:20:10.717 "claimed": true, 00:20:10.717 "claim_type": "exclusive_write", 00:20:10.717 "zoned": false, 00:20:10.717 "supported_io_types": { 00:20:10.717 "read": true, 00:20:10.717 "write": true, 00:20:10.717 "unmap": true, 00:20:10.717 "flush": true, 00:20:10.718 "reset": true, 00:20:10.718 "nvme_admin": false, 00:20:10.718 "nvme_io": false, 00:20:10.718 "nvme_io_md": 
false, 00:20:10.718 "write_zeroes": true, 00:20:10.718 "zcopy": true, 00:20:10.718 "get_zone_info": false, 00:20:10.718 "zone_management": false, 00:20:10.718 "zone_append": false, 00:20:10.718 "compare": false, 00:20:10.718 "compare_and_write": false, 00:20:10.718 "abort": true, 00:20:10.718 "seek_hole": false, 00:20:10.718 "seek_data": false, 00:20:10.718 "copy": true, 00:20:10.718 "nvme_iov_md": false 00:20:10.718 }, 00:20:10.718 "memory_domains": [ 00:20:10.718 { 00:20:10.718 "dma_device_id": "system", 00:20:10.718 "dma_device_type": 1 00:20:10.718 }, 00:20:10.718 { 00:20:10.718 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:10.718 "dma_device_type": 2 00:20:10.718 } 00:20:10.718 ], 00:20:10.718 "driver_specific": {} 00:20:10.718 } 00:20:10.718 ] 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@911 -- # return 0 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.718 14:19:47 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:10.977 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.977 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:10.977 "name": "Existed_Raid", 00:20:10.977 "uuid": "fdbc4fb1-c521-479c-832f-2a2e88925b6f", 00:20:10.977 "strip_size_kb": 0, 00:20:10.977 "state": "online", 00:20:10.977 "raid_level": "raid1", 00:20:10.977 "superblock": true, 00:20:10.977 "num_base_bdevs": 2, 00:20:10.977 "num_base_bdevs_discovered": 2, 00:20:10.977 "num_base_bdevs_operational": 2, 00:20:10.977 "base_bdevs_list": [ 00:20:10.977 { 00:20:10.977 "name": "BaseBdev1", 00:20:10.977 "uuid": "ad00d45b-2555-4bca-a476-dab759880df4", 00:20:10.977 "is_configured": true, 00:20:10.977 "data_offset": 256, 00:20:10.977 "data_size": 7936 00:20:10.977 }, 00:20:10.977 { 00:20:10.977 "name": "BaseBdev2", 00:20:10.977 
"uuid": "c1211daf-ea13-477f-90d5-43802a33d868", 00:20:10.977 "is_configured": true, 00:20:10.977 "data_offset": 256, 00:20:10.977 "data_size": 7936 00:20:10.977 } 00:20:10.977 ] 00:20:10.977 }' 00:20:10.977 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:10.977 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.236 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:11.236 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:11.236 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:11.236 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:11.236 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:11.236 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:11.236 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:11.236 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:11.236 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.236 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.496 [2024-11-27 14:19:48.518522] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:11.496 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.496 14:19:48 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:11.496 "name": "Existed_Raid", 00:20:11.496 "aliases": [ 00:20:11.496 "fdbc4fb1-c521-479c-832f-2a2e88925b6f" 00:20:11.496 ], 00:20:11.496 "product_name": "Raid Volume", 00:20:11.496 "block_size": 4096, 00:20:11.496 "num_blocks": 7936, 00:20:11.496 "uuid": "fdbc4fb1-c521-479c-832f-2a2e88925b6f", 00:20:11.496 "md_size": 32, 00:20:11.496 "md_interleave": false, 00:20:11.496 "dif_type": 0, 00:20:11.496 "assigned_rate_limits": { 00:20:11.496 "rw_ios_per_sec": 0, 00:20:11.496 "rw_mbytes_per_sec": 0, 00:20:11.496 "r_mbytes_per_sec": 0, 00:20:11.496 "w_mbytes_per_sec": 0 00:20:11.496 }, 00:20:11.496 "claimed": false, 00:20:11.496 "zoned": false, 00:20:11.496 "supported_io_types": { 00:20:11.496 "read": true, 00:20:11.496 "write": true, 00:20:11.496 "unmap": false, 00:20:11.496 "flush": false, 00:20:11.496 "reset": true, 00:20:11.496 "nvme_admin": false, 00:20:11.496 "nvme_io": false, 00:20:11.496 "nvme_io_md": false, 00:20:11.496 "write_zeroes": true, 00:20:11.496 "zcopy": false, 00:20:11.496 "get_zone_info": false, 00:20:11.496 "zone_management": false, 00:20:11.496 "zone_append": false, 00:20:11.496 "compare": false, 00:20:11.496 "compare_and_write": false, 00:20:11.496 "abort": false, 00:20:11.496 "seek_hole": false, 00:20:11.496 "seek_data": false, 00:20:11.496 "copy": false, 00:20:11.496 "nvme_iov_md": false 00:20:11.496 }, 00:20:11.496 "memory_domains": [ 00:20:11.496 { 00:20:11.496 "dma_device_id": "system", 00:20:11.496 "dma_device_type": 1 00:20:11.496 }, 00:20:11.496 { 00:20:11.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.496 "dma_device_type": 2 00:20:11.496 }, 00:20:11.496 { 00:20:11.496 "dma_device_id": "system", 00:20:11.496 "dma_device_type": 1 00:20:11.496 }, 00:20:11.496 { 00:20:11.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:11.496 "dma_device_type": 2 00:20:11.496 } 00:20:11.496 ], 00:20:11.496 "driver_specific": { 00:20:11.496 "raid": 
{ 00:20:11.496 "uuid": "fdbc4fb1-c521-479c-832f-2a2e88925b6f", 00:20:11.496 "strip_size_kb": 0, 00:20:11.496 "state": "online", 00:20:11.496 "raid_level": "raid1", 00:20:11.496 "superblock": true, 00:20:11.496 "num_base_bdevs": 2, 00:20:11.496 "num_base_bdevs_discovered": 2, 00:20:11.496 "num_base_bdevs_operational": 2, 00:20:11.496 "base_bdevs_list": [ 00:20:11.496 { 00:20:11.496 "name": "BaseBdev1", 00:20:11.496 "uuid": "ad00d45b-2555-4bca-a476-dab759880df4", 00:20:11.496 "is_configured": true, 00:20:11.496 "data_offset": 256, 00:20:11.496 "data_size": 7936 00:20:11.496 }, 00:20:11.496 { 00:20:11.496 "name": "BaseBdev2", 00:20:11.496 "uuid": "c1211daf-ea13-477f-90d5-43802a33d868", 00:20:11.496 "is_configured": true, 00:20:11.496 "data_offset": 256, 00:20:11.496 "data_size": 7936 00:20:11.496 } 00:20:11.496 ] 00:20:11.496 } 00:20:11.496 } 00:20:11.496 }' 00:20:11.496 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:11.496 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:11.496 BaseBdev2' 00:20:11.496 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:11.496 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:11.496 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:11.497 14:19:48 
bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.497 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.757 [2024-11-27 14:19:48.790311] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:11.757 "name": "Existed_Raid", 00:20:11.757 "uuid": "fdbc4fb1-c521-479c-832f-2a2e88925b6f", 00:20:11.757 "strip_size_kb": 0, 00:20:11.757 "state": "online", 00:20:11.757 "raid_level": "raid1", 00:20:11.757 "superblock": true, 00:20:11.757 "num_base_bdevs": 2, 00:20:11.757 "num_base_bdevs_discovered": 1, 00:20:11.757 "num_base_bdevs_operational": 1, 00:20:11.757 "base_bdevs_list": [ 00:20:11.757 { 00:20:11.757 "name": null, 00:20:11.757 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:11.757 "is_configured": false, 00:20:11.757 "data_offset": 0, 00:20:11.757 "data_size": 7936 00:20:11.757 }, 00:20:11.757 { 00:20:11.757 "name": "BaseBdev2", 00:20:11.757 "uuid": "c1211daf-ea13-477f-90d5-43802a33d868", 00:20:11.757 "is_configured": true, 00:20:11.757 "data_offset": 256, 00:20:11.757 "data_size": 7936 00:20:11.757 } 00:20:11.757 ] 00:20:11.757 }' 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # 
xtrace_disable 00:20:11.757 14:19:48 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.327 [2024-11-27 14:19:49.461227] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:12.327 [2024-11-27 14:19:49.461345] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:12.327 [2024-11-27 14:19:49.540520] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 
00:20:12.327 [2024-11-27 14:19:49.540579] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:12.327 [2024-11-27 14:19:49.540598] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@326 -- # killprocess 87539 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87539 ']' 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- 
common/autotest_common.sh@958 -- # kill -0 87539 00:20:12.327 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:12.587 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:12.587 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87539 00:20:12.587 killing process with pid 87539 00:20:12.587 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:12.587 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:12.587 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87539' 00:20:12.587 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 87539 00:20:12.587 [2024-11-27 14:19:49.631273] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:12.587 14:19:49 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 87539 00:20:12.587 [2024-11-27 14:19:49.646924] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:13.527 14:19:50 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@328 -- # return 0 00:20:13.527 00:20:13.527 real 0m5.616s 00:20:13.527 user 0m8.419s 00:20:13.527 sys 0m0.863s 00:20:13.527 14:19:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:13.527 ************************************ 00:20:13.527 END TEST raid_state_function_test_sb_md_separate 00:20:13.527 ************************************ 00:20:13.527 14:19:50 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:13.527 14:19:50 bdev_raid -- 
bdev/bdev_raid.sh@1005 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:20:13.527 14:19:50 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:13.527 14:19:50 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:13.527 14:19:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:13.527 ************************************ 00:20:13.527 START TEST raid_superblock_test_md_separate 00:20:13.527 ************************************ 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate 
-- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # raid_pid=87788 00:20:13.527 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@413 -- # waitforlisten 87788 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@835 -- # '[' -z 87788 ']' 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:13.527 14:19:50 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:13.787 [2024-11-27 14:19:50.879261] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:20:13.787 [2024-11-27 14:19:50.879820] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87788 ] 00:20:14.047 [2024-11-27 14:19:51.070138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:14.047 [2024-11-27 14:19:51.236617] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.306 [2024-11-27 14:19:51.474573] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.306 [2024-11-27 14:19:51.474688] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:14.874 14:19:51 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc1 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:14.874 malloc1 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:14.874 [2024-11-27 14:19:51.912526] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:14.874 [2024-11-27 14:19:51.912769] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.874 [2024-11-27 14:19:51.912949] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:14.874 [2024-11-27 14:19:51.913113] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.874 [2024-11-27 14:19:51.915684] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.874 [2024-11-27 14:19:51.915897] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:14.874 pt1 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:14.874 
14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:14.874 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b malloc2 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:14.875 malloc2 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:14.875 [2024-11-27 14:19:51.966124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:14.875 [2024-11-27 14:19:51.966233] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:14.875 [2024-11-27 14:19:51.966265] vbdev_passthru.c: 681:vbdev_passthru_register: 
*NOTICE*: io_device created at: 0x0x616000007e80 00:20:14.875 [2024-11-27 14:19:51.966278] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:14.875 [2024-11-27 14:19:51.968865] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:14.875 [2024-11-27 14:19:51.968915] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:14.875 pt2 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:14.875 [2024-11-27 14:19:51.978154] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:14.875 [2024-11-27 14:19:51.980526] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:14.875 [2024-11-27 14:19:51.980749] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:14.875 [2024-11-27 14:19:51.980797] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:14.875 [2024-11-27 14:19:51.980886] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:14.875 [2024-11-27 14:19:51.981031] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:14.875 [2024-11-27 14:19:51.981049] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:14.875 [2024-11-27 14:19:51.981178] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.875 14:19:51 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # 
set +x 00:20:14.875 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.875 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:14.875 "name": "raid_bdev1", 00:20:14.875 "uuid": "1662b262-3804-4526-91eb-383a84dcdc46", 00:20:14.875 "strip_size_kb": 0, 00:20:14.875 "state": "online", 00:20:14.875 "raid_level": "raid1", 00:20:14.875 "superblock": true, 00:20:14.875 "num_base_bdevs": 2, 00:20:14.875 "num_base_bdevs_discovered": 2, 00:20:14.875 "num_base_bdevs_operational": 2, 00:20:14.875 "base_bdevs_list": [ 00:20:14.875 { 00:20:14.875 "name": "pt1", 00:20:14.875 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:14.875 "is_configured": true, 00:20:14.875 "data_offset": 256, 00:20:14.875 "data_size": 7936 00:20:14.875 }, 00:20:14.875 { 00:20:14.875 "name": "pt2", 00:20:14.875 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:14.875 "is_configured": true, 00:20:14.875 "data_offset": 256, 00:20:14.875 "data_size": 7936 00:20:14.875 } 00:20:14.875 ] 00:20:14.875 }' 00:20:14.875 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:14.875 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.444 [2024-11-27 14:19:52.534639] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:15.444 "name": "raid_bdev1", 00:20:15.444 "aliases": [ 00:20:15.444 "1662b262-3804-4526-91eb-383a84dcdc46" 00:20:15.444 ], 00:20:15.444 "product_name": "Raid Volume", 00:20:15.444 "block_size": 4096, 00:20:15.444 "num_blocks": 7936, 00:20:15.444 "uuid": "1662b262-3804-4526-91eb-383a84dcdc46", 00:20:15.444 "md_size": 32, 00:20:15.444 "md_interleave": false, 00:20:15.444 "dif_type": 0, 00:20:15.444 "assigned_rate_limits": { 00:20:15.444 "rw_ios_per_sec": 0, 00:20:15.444 "rw_mbytes_per_sec": 0, 00:20:15.444 "r_mbytes_per_sec": 0, 00:20:15.444 "w_mbytes_per_sec": 0 00:20:15.444 }, 00:20:15.444 "claimed": false, 00:20:15.444 "zoned": false, 00:20:15.444 "supported_io_types": { 00:20:15.444 "read": true, 00:20:15.444 "write": true, 00:20:15.444 "unmap": false, 00:20:15.444 "flush": false, 00:20:15.444 "reset": true, 00:20:15.444 "nvme_admin": false, 00:20:15.444 "nvme_io": false, 00:20:15.444 "nvme_io_md": false, 00:20:15.444 "write_zeroes": true, 00:20:15.444 "zcopy": false, 00:20:15.444 "get_zone_info": false, 00:20:15.444 "zone_management": false, 00:20:15.444 "zone_append": false, 00:20:15.444 "compare": 
false, 00:20:15.444 "compare_and_write": false, 00:20:15.444 "abort": false, 00:20:15.444 "seek_hole": false, 00:20:15.444 "seek_data": false, 00:20:15.444 "copy": false, 00:20:15.444 "nvme_iov_md": false 00:20:15.444 }, 00:20:15.444 "memory_domains": [ 00:20:15.444 { 00:20:15.444 "dma_device_id": "system", 00:20:15.444 "dma_device_type": 1 00:20:15.444 }, 00:20:15.444 { 00:20:15.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.444 "dma_device_type": 2 00:20:15.444 }, 00:20:15.444 { 00:20:15.444 "dma_device_id": "system", 00:20:15.444 "dma_device_type": 1 00:20:15.444 }, 00:20:15.444 { 00:20:15.444 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:15.444 "dma_device_type": 2 00:20:15.444 } 00:20:15.444 ], 00:20:15.444 "driver_specific": { 00:20:15.444 "raid": { 00:20:15.444 "uuid": "1662b262-3804-4526-91eb-383a84dcdc46", 00:20:15.444 "strip_size_kb": 0, 00:20:15.444 "state": "online", 00:20:15.444 "raid_level": "raid1", 00:20:15.444 "superblock": true, 00:20:15.444 "num_base_bdevs": 2, 00:20:15.444 "num_base_bdevs_discovered": 2, 00:20:15.444 "num_base_bdevs_operational": 2, 00:20:15.444 "base_bdevs_list": [ 00:20:15.444 { 00:20:15.444 "name": "pt1", 00:20:15.444 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:15.444 "is_configured": true, 00:20:15.444 "data_offset": 256, 00:20:15.444 "data_size": 7936 00:20:15.444 }, 00:20:15.444 { 00:20:15.444 "name": "pt2", 00:20:15.444 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:15.444 "is_configured": true, 00:20:15.444 "data_offset": 256, 00:20:15.444 "data_size": 7936 00:20:15.444 } 00:20:15.444 ] 00:20:15.444 } 00:20:15.444 } 00:20:15.444 }' 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:15.444 pt2' 00:20:15.444 14:19:52 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.444 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.703 14:19:52 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:15.703 [2024-11-27 14:19:52.806707] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=1662b262-3804-4526-91eb-383a84dcdc46 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@436 -- # '[' -z 1662b262-3804-4526-91eb-383a84dcdc46 ']' 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.703 [2024-11-27 14:19:52.862391] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:15.703 [2024-11-27 14:19:52.862418] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:15.703 
[2024-11-27 14:19:52.862515] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:15.703 [2024-11-27 14:19:52.862587] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:15.703 [2024-11-27 14:19:52.862606] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.703 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.704 14:19:52 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:15.704 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:15.704 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.704 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.704 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.704 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:20:15.704 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:15.704 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.704 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.704 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.963 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:15.963 14:19:52 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:15.963 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:20:15.963 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:15.963 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:15.963 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:20:15.963 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:15.963 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:15.963 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:15.963 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.963 14:19:52 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.963 [2024-11-27 14:19:53.002468] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:15.963 [2024-11-27 14:19:53.004969] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:15.963 [2024-11-27 14:19:53.005092] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:15.963 [2024-11-27 14:19:53.005215] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:15.963 [2024-11-27 14:19:53.005239] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:15.963 [2024-11-27 14:19:53.005253] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:15.963 request: 00:20:15.963 { 00:20:15.963 "name": "raid_bdev1", 00:20:15.963 "raid_level": "raid1", 00:20:15.963 "base_bdevs": [ 00:20:15.963 "malloc1", 00:20:15.963 "malloc2" 00:20:15.963 ], 00:20:15.963 "superblock": false, 00:20:15.963 "method": "bdev_raid_create", 00:20:15.963 "req_id": 1 00:20:15.963 } 00:20:15.963 Got JSON-RPC error response 00:20:15.963 response: 00:20:15.963 { 00:20:15.963 "code": -17, 00:20:15.963 "message": "Failed to create RAID bdev raid_bdev1: 
File exists" 00:20:15.963 } 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@655 -- # es=1 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:15.963 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.964 [2024-11-27 14:19:53.070488] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:15.964 [2024-11-27 14:19:53.070580] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:15.964 [2024-11-27 14:19:53.070604] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:15.964 [2024-11-27 14:19:53.070620] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:15.964 [2024-11-27 14:19:53.073334] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:15.964 [2024-11-27 14:19:53.073411] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:15.964 [2024-11-27 14:19:53.073481] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:15.964 [2024-11-27 14:19:53.073550] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:15.964 pt1 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:15.964 "name": "raid_bdev1", 00:20:15.964 "uuid": "1662b262-3804-4526-91eb-383a84dcdc46", 00:20:15.964 "strip_size_kb": 0, 00:20:15.964 "state": "configuring", 00:20:15.964 "raid_level": "raid1", 00:20:15.964 "superblock": true, 00:20:15.964 "num_base_bdevs": 2, 00:20:15.964 "num_base_bdevs_discovered": 1, 00:20:15.964 "num_base_bdevs_operational": 2, 00:20:15.964 "base_bdevs_list": [ 00:20:15.964 { 00:20:15.964 "name": "pt1", 00:20:15.964 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:15.964 "is_configured": true, 00:20:15.964 "data_offset": 256, 00:20:15.964 "data_size": 7936 00:20:15.964 }, 00:20:15.964 { 00:20:15.964 "name": null, 00:20:15.964 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:15.964 "is_configured": false, 00:20:15.964 "data_offset": 256, 00:20:15.964 "data_size": 7936 00:20:15.964 } 00:20:15.964 ] 00:20:15.964 }' 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:15.964 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:16.532 14:19:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:16.532 [2024-11-27 14:19:53.598622] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:16.532 [2024-11-27 14:19:53.598803] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:16.532 [2024-11-27 14:19:53.598892] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:16.532 [2024-11-27 14:19:53.599040] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:16.532 [2024-11-27 14:19:53.599330] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:16.532 [2024-11-27 14:19:53.599374] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:16.532 [2024-11-27 14:19:53.599442] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:16.532 [2024-11-27 14:19:53.599477] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:16.532 [2024-11-27 14:19:53.599616] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:16.532 [2024-11-27 14:19:53.599638] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:16.532 [2024-11-27 14:19:53.599730] bdev_raid.c: 
265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:16.532 [2024-11-27 14:19:53.599914] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:16.532 [2024-11-27 14:19:53.599940] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:16.532 [2024-11-27 14:19:53.600061] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:16.532 pt2 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:16.532 14:19:53 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:16.532 "name": "raid_bdev1", 00:20:16.532 "uuid": "1662b262-3804-4526-91eb-383a84dcdc46", 00:20:16.532 "strip_size_kb": 0, 00:20:16.532 "state": "online", 00:20:16.532 "raid_level": "raid1", 00:20:16.532 "superblock": true, 00:20:16.532 "num_base_bdevs": 2, 00:20:16.532 "num_base_bdevs_discovered": 2, 00:20:16.532 "num_base_bdevs_operational": 2, 00:20:16.532 "base_bdevs_list": [ 00:20:16.532 { 00:20:16.532 "name": "pt1", 00:20:16.532 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:16.532 "is_configured": true, 00:20:16.532 "data_offset": 256, 00:20:16.532 "data_size": 7936 00:20:16.532 }, 00:20:16.532 { 00:20:16.532 "name": "pt2", 00:20:16.532 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:16.532 "is_configured": true, 00:20:16.532 "data_offset": 256, 00:20:16.532 "data_size": 7936 00:20:16.532 } 00:20:16.532 ] 00:20:16.532 }' 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:16.532 14:19:53 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@484 -- # 
verify_raid_bdev_properties raid_bdev1 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@184 -- # local name 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.128 [2024-11-27 14:19:54.139229] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:17.128 "name": "raid_bdev1", 00:20:17.128 "aliases": [ 00:20:17.128 "1662b262-3804-4526-91eb-383a84dcdc46" 00:20:17.128 ], 00:20:17.128 "product_name": "Raid Volume", 00:20:17.128 "block_size": 4096, 00:20:17.128 "num_blocks": 7936, 00:20:17.128 "uuid": "1662b262-3804-4526-91eb-383a84dcdc46", 00:20:17.128 "md_size": 32, 00:20:17.128 "md_interleave": false, 00:20:17.128 "dif_type": 0, 00:20:17.128 "assigned_rate_limits": { 00:20:17.128 "rw_ios_per_sec": 0, 00:20:17.128 "rw_mbytes_per_sec": 0, 00:20:17.128 "r_mbytes_per_sec": 0, 00:20:17.128 
"w_mbytes_per_sec": 0 00:20:17.128 }, 00:20:17.128 "claimed": false, 00:20:17.128 "zoned": false, 00:20:17.128 "supported_io_types": { 00:20:17.128 "read": true, 00:20:17.128 "write": true, 00:20:17.128 "unmap": false, 00:20:17.128 "flush": false, 00:20:17.128 "reset": true, 00:20:17.128 "nvme_admin": false, 00:20:17.128 "nvme_io": false, 00:20:17.128 "nvme_io_md": false, 00:20:17.128 "write_zeroes": true, 00:20:17.128 "zcopy": false, 00:20:17.128 "get_zone_info": false, 00:20:17.128 "zone_management": false, 00:20:17.128 "zone_append": false, 00:20:17.128 "compare": false, 00:20:17.128 "compare_and_write": false, 00:20:17.128 "abort": false, 00:20:17.128 "seek_hole": false, 00:20:17.128 "seek_data": false, 00:20:17.128 "copy": false, 00:20:17.128 "nvme_iov_md": false 00:20:17.128 }, 00:20:17.128 "memory_domains": [ 00:20:17.128 { 00:20:17.128 "dma_device_id": "system", 00:20:17.128 "dma_device_type": 1 00:20:17.128 }, 00:20:17.128 { 00:20:17.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.128 "dma_device_type": 2 00:20:17.128 }, 00:20:17.128 { 00:20:17.128 "dma_device_id": "system", 00:20:17.128 "dma_device_type": 1 00:20:17.128 }, 00:20:17.128 { 00:20:17.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:17.128 "dma_device_type": 2 00:20:17.128 } 00:20:17.128 ], 00:20:17.128 "driver_specific": { 00:20:17.128 "raid": { 00:20:17.128 "uuid": "1662b262-3804-4526-91eb-383a84dcdc46", 00:20:17.128 "strip_size_kb": 0, 00:20:17.128 "state": "online", 00:20:17.128 "raid_level": "raid1", 00:20:17.128 "superblock": true, 00:20:17.128 "num_base_bdevs": 2, 00:20:17.128 "num_base_bdevs_discovered": 2, 00:20:17.128 "num_base_bdevs_operational": 2, 00:20:17.128 "base_bdevs_list": [ 00:20:17.128 { 00:20:17.128 "name": "pt1", 00:20:17.128 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:17.128 "is_configured": true, 00:20:17.128 "data_offset": 256, 00:20:17.128 "data_size": 7936 00:20:17.128 }, 00:20:17.128 { 00:20:17.128 "name": "pt2", 00:20:17.128 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:20:17.128 "is_configured": true, 00:20:17.128 "data_offset": 256, 00:20:17.128 "data_size": 7936 00:20:17.128 } 00:20:17.128 ] 00:20:17.128 } 00:20:17.128 } 00:20:17.128 }' 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:17.128 pt2' 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4096 32 false 0' 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 
00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4096 32 false 0' 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@193 -- # [[ 4096 32 false 0 == \4\0\9\6\ \3\2\ \f\a\l\s\e\ \0 ]] 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.128 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.128 [2024-11-27 14:19:54.399326] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@487 -- # '[' 1662b262-3804-4526-91eb-383a84dcdc46 '!=' 1662b262-3804-4526-91eb-383a84dcdc46 ']' 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@198 -- # case $1 in 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@199 -- # return 0 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.387 [2024-11-27 14:19:54.450973] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.387 14:19:54 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.387 "name": "raid_bdev1", 00:20:17.387 "uuid": "1662b262-3804-4526-91eb-383a84dcdc46", 00:20:17.387 "strip_size_kb": 0, 00:20:17.387 "state": "online", 00:20:17.387 "raid_level": "raid1", 00:20:17.387 "superblock": true, 00:20:17.387 "num_base_bdevs": 2, 00:20:17.387 "num_base_bdevs_discovered": 1, 00:20:17.387 "num_base_bdevs_operational": 1, 00:20:17.387 "base_bdevs_list": [ 00:20:17.387 { 00:20:17.387 "name": null, 00:20:17.387 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.387 "is_configured": false, 00:20:17.387 "data_offset": 0, 00:20:17.387 "data_size": 7936 00:20:17.387 }, 00:20:17.387 { 00:20:17.387 "name": "pt2", 00:20:17.387 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:17.387 "is_configured": true, 00:20:17.387 "data_offset": 256, 00:20:17.387 "data_size": 7936 00:20:17.387 } 00:20:17.387 ] 00:20:17.387 }' 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.387 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.956 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:17.956 14:19:54 bdev_raid.raid_superblock_test_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.956 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.956 [2024-11-27 14:19:54.951111] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:17.956 [2024-11-27 14:19:54.951379] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:17.956 [2024-11-27 14:19:54.951501] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:17.956 [2024-11-27 14:19:54.951568] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:17.956 [2024-11-27 14:19:54.951587] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:17.956 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.956 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.956 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.956 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.956 14:19:54 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:17.956 14:19:54 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:17.956 14:19:55 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # i=1 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.956 [2024-11-27 14:19:55.027205] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:17.956 [2024-11-27 14:19:55.027487] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:17.956 [2024-11-27 14:19:55.027522] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:17.956 [2024-11-27 14:19:55.027539] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:17.956 [2024-11-27 14:19:55.030367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
pt_bdev registered 00:20:17.956 [2024-11-27 14:19:55.030429] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:17.956 [2024-11-27 14:19:55.030503] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:17.956 [2024-11-27 14:19:55.030593] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:17.956 [2024-11-27 14:19:55.030709] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:17.956 [2024-11-27 14:19:55.030730] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:17.956 [2024-11-27 14:19:55.031019] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:17.956 [2024-11-27 14:19:55.031406] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:17.956 pt2 00:20:17.956 [2024-11-27 14:19:55.031564] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:17.956 [2024-11-27 14:19:55.031776] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.956 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:17.956 "name": "raid_bdev1", 00:20:17.956 "uuid": "1662b262-3804-4526-91eb-383a84dcdc46", 00:20:17.956 "strip_size_kb": 0, 00:20:17.956 "state": "online", 00:20:17.956 "raid_level": "raid1", 00:20:17.956 "superblock": true, 00:20:17.956 "num_base_bdevs": 2, 00:20:17.956 "num_base_bdevs_discovered": 1, 00:20:17.956 "num_base_bdevs_operational": 1, 00:20:17.956 "base_bdevs_list": [ 00:20:17.956 { 00:20:17.956 "name": null, 00:20:17.956 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:17.956 "is_configured": false, 00:20:17.956 "data_offset": 256, 00:20:17.957 "data_size": 7936 00:20:17.957 }, 00:20:17.957 { 00:20:17.957 "name": "pt2", 00:20:17.957 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:17.957 "is_configured": true, 
00:20:17.957 "data_offset": 256, 00:20:17.957 "data_size": 7936 00:20:17.957 } 00:20:17.957 ] 00:20:17.957 }' 00:20:17.957 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:17.957 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.526 [2024-11-27 14:19:55.539900] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:18.526 [2024-11-27 14:19:55.539934] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:18.526 [2024-11-27 14:19:55.540030] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:18.526 [2024-11-27 14:19:55.540099] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:18.526 [2024-11-27 14:19:55.540129] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.526 14:19:55 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.526 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.526 [2024-11-27 14:19:55.603960] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:18.526 [2024-11-27 14:19:55.604044] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:18.526 [2024-11-27 14:19:55.604081] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:18.526 [2024-11-27 14:19:55.604095] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:18.526 [2024-11-27 14:19:55.606667] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:18.526 [2024-11-27 14:19:55.606705] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:18.526 [2024-11-27 14:19:55.606850] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:18.526 [2024-11-27 14:19:55.606920] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:18.526 [2024-11-27 14:19:55.607089] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:18.526 
[2024-11-27 14:19:55.607107] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:18.526 [2024-11-27 14:19:55.607161] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008580 name raid_bdev1, state configuring 00:20:18.526 [2024-11-27 14:19:55.607239] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:18.527 [2024-11-27 14:19:55.607351] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:18.527 [2024-11-27 14:19:55.607603] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:18.527 [2024-11-27 14:19:55.607714] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:18.527 [2024-11-27 14:19:55.607907] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:18.527 [2024-11-27 14:19:55.607926] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:18.527 [2024-11-27 14:19:55.608107] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:18.527 pt1 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- 
bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:18.527 "name": "raid_bdev1", 00:20:18.527 "uuid": "1662b262-3804-4526-91eb-383a84dcdc46", 00:20:18.527 "strip_size_kb": 0, 00:20:18.527 "state": "online", 00:20:18.527 "raid_level": "raid1", 00:20:18.527 "superblock": true, 00:20:18.527 "num_base_bdevs": 2, 00:20:18.527 "num_base_bdevs_discovered": 1, 00:20:18.527 "num_base_bdevs_operational": 1, 00:20:18.527 "base_bdevs_list": [ 00:20:18.527 { 00:20:18.527 "name": null, 00:20:18.527 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:18.527 "is_configured": false, 00:20:18.527 "data_offset": 256, 00:20:18.527 "data_size": 7936 00:20:18.527 }, 00:20:18.527 { 00:20:18.527 
"name": "pt2", 00:20:18.527 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:18.527 "is_configured": true, 00:20:18.527 "data_offset": 256, 00:20:18.527 "data_size": 7936 00:20:18.527 } 00:20:18.527 ] 00:20:18.527 }' 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:18.527 14:19:55 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:19.094 [2024-11-27 14:19:56.176601] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@558 -- # '[' 
1662b262-3804-4526-91eb-383a84dcdc46 '!=' 1662b262-3804-4526-91eb-383a84dcdc46 ']' 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@563 -- # killprocess 87788 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@954 -- # '[' -z 87788 ']' 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # kill -0 87788 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 87788 00:20:19.094 killing process with pid 87788 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 87788' 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@973 -- # kill 87788 00:20:19.094 14:19:56 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@978 -- # wait 87788 00:20:19.094 [2024-11-27 14:19:56.251312] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:19.094 [2024-11-27 14:19:56.251483] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:19.094 [2024-11-27 14:19:56.251571] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:19.094 [2024-11-27 14:19:56.251622] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 
00:20:19.353 [2024-11-27 14:19:56.439963] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:20.290 14:19:57 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@565 -- # return 0 00:20:20.290 00:20:20.290 real 0m6.673s 00:20:20.290 user 0m10.556s 00:20:20.290 sys 0m1.028s 00:20:20.290 14:19:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.290 14:19:57 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:20.290 ************************************ 00:20:20.290 END TEST raid_superblock_test_md_separate 00:20:20.290 ************************************ 00:20:20.290 14:19:57 bdev_raid -- bdev/bdev_raid.sh@1006 -- # '[' true = true ']' 00:20:20.290 14:19:57 bdev_raid -- bdev/bdev_raid.sh@1007 -- # run_test raid_rebuild_test_sb_md_separate raid_rebuild_test raid1 2 true false true 00:20:20.290 14:19:57 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:20.290 14:19:57 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:20.290 14:19:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:20.290 ************************************ 00:20:20.290 START TEST raid_rebuild_test_sb_md_separate 00:20:20.290 ************************************ 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false true 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:20.290 14:19:57 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@573 -- # local verify=true 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate 
-- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@597 -- # raid_pid=88120 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@598 -- # waitforlisten 88120 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@835 -- # '[' -z 88120 ']' 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:20.290 14:19:57 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:20.550 [2024-11-27 14:19:57.613846] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:20:20.550 [2024-11-27 14:19:57.614294] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88120 ] 00:20:20.550 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:20.550 Zero copy mechanism will not be used. 00:20:20.550 [2024-11-27 14:19:57.808062] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.809 [2024-11-27 14:19:57.950576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:21.068 [2024-11-27 14:19:58.146250] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.068 [2024-11-27 14:19:58.146502] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@868 -- # return 0 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev1_malloc 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.637 BaseBdev1_malloc 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.637 [2024-11-27 14:19:58.688280] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:21.637 [2024-11-27 14:19:58.688520] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.637 [2024-11-27 14:19:58.688566] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:21.637 [2024-11-27 14:19:58.688588] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.637 [2024-11-27 14:19:58.691338] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.637 [2024-11-27 14:19:58.691397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:21.637 BaseBdev1 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b BaseBdev2_malloc 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.637 BaseBdev2_malloc 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.637 14:19:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.637 [2024-11-27 14:19:58.740021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:21.637 [2024-11-27 14:19:58.740295] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.637 [2024-11-27 14:19:58.740368] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:21.637 [2024-11-27 14:19:58.740572] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.637 [2024-11-27 14:19:58.743089] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.637 [2024-11-27 14:19:58.743329] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:21.637 BaseBdev2 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -b spare_malloc 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.637 spare_malloc 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.637 spare_delay 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.637 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.638 [2024-11-27 14:19:58.809861] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:21.638 [2024-11-27 14:19:58.810119] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:21.638 [2024-11-27 14:19:58.810192] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:21.638 [2024-11-27 14:19:58.810403] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:21.638 [2024-11-27 14:19:58.812883] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:21.638 spare 00:20:21.638 [2024-11-27 14:19:58.813079] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.638 [2024-11-27 14:19:58.817934] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:21.638 [2024-11-27 14:19:58.820484] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:21.638 [2024-11-27 14:19:58.820895] 
bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:21.638 [2024-11-27 14:19:58.821077] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:21.638 [2024-11-27 14:19:58.821238] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:21.638 [2024-11-27 14:19:58.821526] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:21.638 [2024-11-27 14:19:58.821632] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:21.638 [2024-11-27 14:19:58.821966] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:21.638 14:19:58 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:21.638 "name": "raid_bdev1", 00:20:21.638 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:21.638 "strip_size_kb": 0, 00:20:21.638 "state": "online", 00:20:21.638 "raid_level": "raid1", 00:20:21.638 "superblock": true, 00:20:21.638 "num_base_bdevs": 2, 00:20:21.638 "num_base_bdevs_discovered": 2, 00:20:21.638 "num_base_bdevs_operational": 2, 00:20:21.638 "base_bdevs_list": [ 00:20:21.638 { 00:20:21.638 "name": "BaseBdev1", 00:20:21.638 "uuid": "4d9ae955-65a4-53ad-85d6-385e9c9908cf", 00:20:21.638 "is_configured": true, 00:20:21.638 "data_offset": 256, 00:20:21.638 "data_size": 7936 00:20:21.638 }, 00:20:21.638 { 00:20:21.638 "name": "BaseBdev2", 00:20:21.638 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:21.638 "is_configured": true, 00:20:21.638 "data_offset": 256, 00:20:21.638 "data_size": 7936 00:20:21.638 } 00:20:21.638 ] 00:20:21.638 }' 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:21.638 14:19:58 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # rpc_cmd 
bdev_get_bdevs -b raid_bdev1 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:22.216 [2024-11-27 14:19:59.346516] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:22.216 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:22.217 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@624 -- # '[' true = true ']' 00:20:22.217 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@625 -- # local write_unit_size 00:20:22.217 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@628 -- # nbd_start_disks /var/tmp/spdk.sock raid_bdev1 /dev/nbd0 00:20:22.217 14:19:59 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:22.217 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('raid_bdev1') 00:20:22.217 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:22.217 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:20:22.217 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:22.217 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:22.217 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:22.217 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:22.217 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk raid_bdev1 /dev/nbd0 00:20:22.480 [2024-11-27 14:19:59.738389] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:22.480 /dev/nbd0 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- 
common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:22.739 1+0 records in 00:20:22.739 1+0 records out 00:20:22.739 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000581584 s, 7.0 MB/s 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@629 -- # '[' raid1 = raid5f ']' 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@633 -- # write_unit_size=1 00:20:22.739 14:19:59 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@635 -- # dd if=/dev/urandom of=/dev/nbd0 bs=4096 count=7936 
oflag=direct 00:20:23.676 7936+0 records in 00:20:23.676 7936+0 records out 00:20:23.676 32505856 bytes (33 MB, 31 MiB) copied, 0.896553 s, 36.3 MB/s 00:20:23.676 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@636 -- # nbd_stop_disks /var/tmp/spdk.sock /dev/nbd0 00:20:23.676 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:23.676 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:20:23.676 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:23.676 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:23.676 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:23.676 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:23.935 [2024-11-27 14:20:00.957281] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:23.935 14:20:00 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:23.935 [2024-11-27 14:20:00.973522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:23.935 14:20:00 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.935 14:20:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:23.935 "name": "raid_bdev1", 00:20:23.935 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:23.935 "strip_size_kb": 0, 00:20:23.935 "state": "online", 00:20:23.935 "raid_level": "raid1", 00:20:23.935 "superblock": true, 00:20:23.935 "num_base_bdevs": 2, 00:20:23.935 "num_base_bdevs_discovered": 1, 00:20:23.936 "num_base_bdevs_operational": 1, 00:20:23.936 "base_bdevs_list": [ 00:20:23.936 { 00:20:23.936 "name": null, 00:20:23.936 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:23.936 "is_configured": false, 00:20:23.936 "data_offset": 0, 00:20:23.936 "data_size": 7936 00:20:23.936 }, 00:20:23.936 { 00:20:23.936 "name": "BaseBdev2", 00:20:23.936 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:23.936 "is_configured": true, 00:20:23.936 "data_offset": 256, 00:20:23.936 "data_size": 7936 00:20:23.936 } 00:20:23.936 ] 00:20:23.936 }' 00:20:23.936 14:20:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:23.936 14:20:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.503 14:20:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:24.503 14:20:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.503 14:20:01 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:24.503 [2024-11-27 14:20:01.505729] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:24.503 [2024-11-27 14:20:01.518362] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d260 00:20:24.503 14:20:01 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.503 14:20:01 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:24.503 [2024-11-27 14:20:01.520841] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:25.457 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:25.457 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:25.457 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:25.457 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:25.457 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:25.457 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:25.458 "name": "raid_bdev1", 00:20:25.458 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:25.458 "strip_size_kb": 0, 00:20:25.458 "state": "online", 00:20:25.458 "raid_level": "raid1", 00:20:25.458 "superblock": true, 00:20:25.458 "num_base_bdevs": 2, 00:20:25.458 "num_base_bdevs_discovered": 2, 00:20:25.458 "num_base_bdevs_operational": 2, 00:20:25.458 "process": { 00:20:25.458 "type": "rebuild", 00:20:25.458 "target": "spare", 00:20:25.458 "progress": { 00:20:25.458 "blocks": 2560, 00:20:25.458 "percent": 32 00:20:25.458 } 00:20:25.458 }, 00:20:25.458 "base_bdevs_list": [ 00:20:25.458 { 00:20:25.458 "name": "spare", 00:20:25.458 "uuid": "320cb78a-f460-57c5-9870-e2ded21cedb4", 00:20:25.458 "is_configured": true, 00:20:25.458 "data_offset": 256, 00:20:25.458 "data_size": 7936 00:20:25.458 }, 00:20:25.458 { 00:20:25.458 "name": "BaseBdev2", 00:20:25.458 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:25.458 "is_configured": true, 00:20:25.458 "data_offset": 256, 00:20:25.458 "data_size": 7936 00:20:25.458 } 00:20:25.458 ] 00:20:25.458 }' 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.458 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.458 
[2024-11-27 14:20:02.686740] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.458 [2024-11-27 14:20:02.730480] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:25.458 [2024-11-27 14:20:02.730898] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:25.458 [2024-11-27 14:20:02.730931] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:25.458 [2024-11-27 14:20:02.730952] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:25.718 14:20:02 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:25.718 "name": "raid_bdev1", 00:20:25.718 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:25.718 "strip_size_kb": 0, 00:20:25.718 "state": "online", 00:20:25.718 "raid_level": "raid1", 00:20:25.718 "superblock": true, 00:20:25.718 "num_base_bdevs": 2, 00:20:25.718 "num_base_bdevs_discovered": 1, 00:20:25.718 "num_base_bdevs_operational": 1, 00:20:25.718 "base_bdevs_list": [ 00:20:25.718 { 00:20:25.718 "name": null, 00:20:25.718 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:25.718 "is_configured": false, 00:20:25.718 "data_offset": 0, 00:20:25.718 "data_size": 7936 00:20:25.718 }, 00:20:25.718 { 00:20:25.718 "name": "BaseBdev2", 00:20:25.718 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:25.718 "is_configured": true, 00:20:25.718 "data_offset": 256, 00:20:25.718 "data_size": 7936 00:20:25.718 } 00:20:25.718 ] 00:20:25.718 }' 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:25.718 14:20:02 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:26.286 "name": "raid_bdev1", 00:20:26.286 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:26.286 "strip_size_kb": 0, 00:20:26.286 "state": "online", 00:20:26.286 "raid_level": "raid1", 00:20:26.286 "superblock": true, 00:20:26.286 "num_base_bdevs": 2, 00:20:26.286 "num_base_bdevs_discovered": 1, 00:20:26.286 "num_base_bdevs_operational": 1, 00:20:26.286 "base_bdevs_list": [ 00:20:26.286 { 00:20:26.286 "name": null, 00:20:26.286 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:26.286 "is_configured": false, 00:20:26.286 "data_offset": 0, 00:20:26.286 "data_size": 7936 00:20:26.286 }, 00:20:26.286 { 00:20:26.286 "name": "BaseBdev2", 00:20:26.286 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:26.286 "is_configured": true, 00:20:26.286 "data_offset": 256, 00:20:26.286 "data_size": 7936 00:20:26.286 } 00:20:26.286 ] 00:20:26.286 }' 
00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:26.286 [2024-11-27 14:20:03.417480] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:26.286 [2024-11-27 14:20:03.431601] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d00018d330 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.286 14:20:03 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:26.286 [2024-11-27 14:20:03.434466] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:27.223 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.223 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.223 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:27.223 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:27.223 14:20:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.223 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.223 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.224 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.224 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:27.224 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.224 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:27.224 "name": "raid_bdev1", 00:20:27.224 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:27.224 "strip_size_kb": 0, 00:20:27.224 "state": "online", 00:20:27.224 "raid_level": "raid1", 00:20:27.224 "superblock": true, 00:20:27.224 "num_base_bdevs": 2, 00:20:27.224 "num_base_bdevs_discovered": 2, 00:20:27.224 "num_base_bdevs_operational": 2, 00:20:27.224 "process": { 00:20:27.224 "type": "rebuild", 00:20:27.224 "target": "spare", 00:20:27.224 "progress": { 00:20:27.224 "blocks": 2560, 00:20:27.224 "percent": 32 00:20:27.224 } 00:20:27.224 }, 00:20:27.224 "base_bdevs_list": [ 00:20:27.224 { 00:20:27.224 "name": "spare", 00:20:27.224 "uuid": "320cb78a-f460-57c5-9870-e2ded21cedb4", 00:20:27.224 "is_configured": true, 00:20:27.224 "data_offset": 256, 00:20:27.224 "data_size": 7936 00:20:27.224 }, 00:20:27.224 { 00:20:27.224 "name": "BaseBdev2", 00:20:27.224 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:27.224 "is_configured": true, 00:20:27.224 "data_offset": 256, 00:20:27.224 "data_size": 7936 00:20:27.224 } 00:20:27.224 ] 00:20:27.224 }' 00:20:27.224 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r 
'.process.type // "none"' 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:20:27.483 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@706 -- # local timeout=771 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:27.483 14:20:04 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:27.483 "name": "raid_bdev1", 00:20:27.483 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:27.483 "strip_size_kb": 0, 00:20:27.483 "state": "online", 00:20:27.483 "raid_level": "raid1", 00:20:27.483 "superblock": true, 00:20:27.483 "num_base_bdevs": 2, 00:20:27.483 "num_base_bdevs_discovered": 2, 00:20:27.483 "num_base_bdevs_operational": 2, 00:20:27.483 "process": { 00:20:27.483 "type": "rebuild", 00:20:27.483 "target": "spare", 00:20:27.483 "progress": { 00:20:27.483 "blocks": 2816, 00:20:27.483 "percent": 35 00:20:27.483 } 00:20:27.483 }, 00:20:27.483 "base_bdevs_list": [ 00:20:27.483 { 00:20:27.483 "name": "spare", 00:20:27.483 "uuid": "320cb78a-f460-57c5-9870-e2ded21cedb4", 00:20:27.483 "is_configured": true, 00:20:27.483 "data_offset": 256, 00:20:27.483 "data_size": 7936 00:20:27.483 }, 00:20:27.483 { 00:20:27.483 "name": "BaseBdev2", 00:20:27.483 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:27.483 "is_configured": true, 00:20:27.483 "data_offset": 256, 00:20:27.483 "data_size": 7936 00:20:27.483 } 00:20:27.483 ] 00:20:27.483 }' 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ 
rebuild == \r\e\b\u\i\l\d ]] 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:27.483 14:20:04 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:28.862 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:20:28.862 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:28.862 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:28.862 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:28.862 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:28.862 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:28.862 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:28.862 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:28.863 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.863 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:28.863 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.863 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:28.863 "name": "raid_bdev1", 00:20:28.863 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:28.863 "strip_size_kb": 0, 00:20:28.863 "state": 
"online", 00:20:28.863 "raid_level": "raid1", 00:20:28.863 "superblock": true, 00:20:28.863 "num_base_bdevs": 2, 00:20:28.863 "num_base_bdevs_discovered": 2, 00:20:28.863 "num_base_bdevs_operational": 2, 00:20:28.863 "process": { 00:20:28.863 "type": "rebuild", 00:20:28.863 "target": "spare", 00:20:28.863 "progress": { 00:20:28.863 "blocks": 5632, 00:20:28.863 "percent": 70 00:20:28.863 } 00:20:28.863 }, 00:20:28.863 "base_bdevs_list": [ 00:20:28.863 { 00:20:28.863 "name": "spare", 00:20:28.863 "uuid": "320cb78a-f460-57c5-9870-e2ded21cedb4", 00:20:28.863 "is_configured": true, 00:20:28.863 "data_offset": 256, 00:20:28.863 "data_size": 7936 00:20:28.863 }, 00:20:28.863 { 00:20:28.863 "name": "BaseBdev2", 00:20:28.863 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:28.863 "is_configured": true, 00:20:28.863 "data_offset": 256, 00:20:28.863 "data_size": 7936 00:20:28.863 } 00:20:28.863 ] 00:20:28.863 }' 00:20:28.863 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:28.863 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:28.863 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:28.863 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:28.863 14:20:05 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@711 -- # sleep 1 00:20:29.430 [2024-11-27 14:20:06.558055] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:20:29.430 [2024-11-27 14:20:06.558194] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:20:29.430 [2024-11-27 14:20:06.558384] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@707 
-- # (( SECONDS < timeout )) 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.689 "name": "raid_bdev1", 00:20:29.689 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:29.689 "strip_size_kb": 0, 00:20:29.689 "state": "online", 00:20:29.689 "raid_level": "raid1", 00:20:29.689 "superblock": true, 00:20:29.689 "num_base_bdevs": 2, 00:20:29.689 "num_base_bdevs_discovered": 2, 00:20:29.689 "num_base_bdevs_operational": 2, 00:20:29.689 "base_bdevs_list": [ 00:20:29.689 { 00:20:29.689 "name": "spare", 00:20:29.689 "uuid": "320cb78a-f460-57c5-9870-e2ded21cedb4", 00:20:29.689 "is_configured": true, 00:20:29.689 "data_offset": 256, 00:20:29.689 "data_size": 7936 00:20:29.689 }, 00:20:29.689 
{ 00:20:29.689 "name": "BaseBdev2", 00:20:29.689 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:29.689 "is_configured": true, 00:20:29.689 "data_offset": 256, 00:20:29.689 "data_size": 7936 00:20:29.689 } 00:20:29.689 ] 00:20:29.689 }' 00:20:29.689 14:20:06 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@709 -- # break 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:29.947 14:20:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.947 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:29.947 "name": "raid_bdev1", 00:20:29.947 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:29.947 "strip_size_kb": 0, 00:20:29.947 "state": "online", 00:20:29.947 "raid_level": "raid1", 00:20:29.947 "superblock": true, 00:20:29.947 "num_base_bdevs": 2, 00:20:29.947 "num_base_bdevs_discovered": 2, 00:20:29.947 "num_base_bdevs_operational": 2, 00:20:29.947 "base_bdevs_list": [ 00:20:29.947 { 00:20:29.948 "name": "spare", 00:20:29.948 "uuid": "320cb78a-f460-57c5-9870-e2ded21cedb4", 00:20:29.948 "is_configured": true, 00:20:29.948 "data_offset": 256, 00:20:29.948 "data_size": 7936 00:20:29.948 }, 00:20:29.948 { 00:20:29.948 "name": "BaseBdev2", 00:20:29.948 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:29.948 "is_configured": true, 00:20:29.948 "data_offset": 256, 00:20:29.948 "data_size": 7936 00:20:29.948 } 00:20:29.948 ] 00:20:29.948 }' 00:20:29.948 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:29.948 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:29.948 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:30.207 14:20:07 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:30.207 "name": "raid_bdev1", 00:20:30.207 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:30.207 "strip_size_kb": 0, 00:20:30.207 "state": "online", 00:20:30.207 "raid_level": "raid1", 00:20:30.207 "superblock": true, 00:20:30.207 "num_base_bdevs": 2, 00:20:30.207 "num_base_bdevs_discovered": 2, 00:20:30.207 "num_base_bdevs_operational": 2, 00:20:30.207 "base_bdevs_list": [ 00:20:30.207 { 00:20:30.207 "name": "spare", 00:20:30.207 "uuid": 
"320cb78a-f460-57c5-9870-e2ded21cedb4", 00:20:30.207 "is_configured": true, 00:20:30.207 "data_offset": 256, 00:20:30.207 "data_size": 7936 00:20:30.207 }, 00:20:30.207 { 00:20:30.207 "name": "BaseBdev2", 00:20:30.207 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:30.207 "is_configured": true, 00:20:30.207 "data_offset": 256, 00:20:30.207 "data_size": 7936 00:20:30.207 } 00:20:30.207 ] 00:20:30.207 }' 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:30.207 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.466 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:30.466 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.466 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.466 [2024-11-27 14:20:07.737586] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:30.466 [2024-11-27 14:20:07.737650] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:30.466 [2024-11-27 14:20:07.737769] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:30.466 [2024-11-27 14:20:07.737906] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:30.466 [2024-11-27 14:20:07.737924] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:30.466 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- 
# jq length 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@722 -- # '[' true = true ']' 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@723 -- # '[' false = true ']' 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@737 -- # nbd_start_disks /var/tmp/spdk.sock 'BaseBdev1 spare' '/dev/nbd0 /dev/nbd1' 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk.sock 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # bdev_list=('BaseBdev1' 'spare') 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@10 -- # local bdev_list 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@11 -- # local nbd_list 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@12 -- # local i 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:30.725 14:20:07 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk BaseBdev1 /dev/nbd0 00:20:30.984 
/dev/nbd0 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:20:30.984 1+0 records in 00:20:30.984 1+0 records out 00:20:30.984 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000831168 s, 4.9 MB/s 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:30.984 14:20:08 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:30.984 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_start_disk spare /dev/nbd1 00:20:31.243 /dev/nbd1 00:20:31.243 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:20:31.243 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@873 -- # local i 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@877 -- # break 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 
iflag=direct 00:20:31.244 1+0 records in 00:20:31.244 1+0 records out 00:20:31.244 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000361246 s, 11.3 MB/s 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@890 -- # size=4096 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@893 -- # return 0 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:20:31.244 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@738 -- # cmp -i 1048576 /dev/nbd0 /dev/nbd1 00:20:31.502 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@739 -- # nbd_stop_disks /var/tmp/spdk.sock '/dev/nbd0 /dev/nbd1' 00:20:31.502 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk.sock 00:20:31.502 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:20:31.502 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@50 -- # local nbd_list 00:20:31.502 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@51 -- # local i 00:20:31.502 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.502 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd0 00:20:31.760 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:20:31.760 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:20:31.760 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:20:31.760 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:31.760 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:31.760 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:20:31.760 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:31.761 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:31.761 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:20:31.761 14:20:08 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock nbd_stop_disk /dev/nbd1 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:20:32.327 
14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@41 -- # break 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/nbd_common.sh@45 -- # return 0 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.327 [2024-11-27 14:20:09.329553] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:32.327 [2024-11-27 14:20:09.329821] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:32.327 [2024-11-27 14:20:09.329868] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a280 00:20:32.327 [2024-11-27 14:20:09.329885] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:32.327 [2024-11-27 14:20:09.332727] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:32.327 [2024-11-27 14:20:09.332802] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:32.327 [2024-11-27 14:20:09.332892] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev spare 00:20:32.327 [2024-11-27 14:20:09.332957] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:32.327 [2024-11-27 14:20:09.333155] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:32.327 spare 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.327 [2024-11-27 14:20:09.433306] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:20:32.327 [2024-11-27 14:20:09.433580] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:20:32.327 [2024-11-27 14:20:09.433783] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1b50 00:20:32.327 [2024-11-27 14:20:09.434160] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:20:32.327 [2024-11-27 14:20:09.434310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:20:32.327 [2024-11-27 14:20:09.434643] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.327 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:32.327 "name": "raid_bdev1", 00:20:32.327 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:32.327 "strip_size_kb": 0, 00:20:32.327 "state": "online", 00:20:32.327 "raid_level": "raid1", 00:20:32.327 "superblock": true, 00:20:32.327 "num_base_bdevs": 2, 00:20:32.327 "num_base_bdevs_discovered": 2, 00:20:32.327 "num_base_bdevs_operational": 2, 00:20:32.327 "base_bdevs_list": [ 
00:20:32.327 { 00:20:32.327 "name": "spare", 00:20:32.327 "uuid": "320cb78a-f460-57c5-9870-e2ded21cedb4", 00:20:32.327 "is_configured": true, 00:20:32.327 "data_offset": 256, 00:20:32.327 "data_size": 7936 00:20:32.327 }, 00:20:32.327 { 00:20:32.327 "name": "BaseBdev2", 00:20:32.327 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:32.327 "is_configured": true, 00:20:32.327 "data_offset": 256, 00:20:32.327 "data_size": 7936 00:20:32.327 } 00:20:32.327 ] 00:20:32.328 }' 00:20:32.328 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:32.328 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:32.894 "name": "raid_bdev1", 00:20:32.894 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:32.894 "strip_size_kb": 0, 00:20:32.894 "state": "online", 00:20:32.894 "raid_level": "raid1", 00:20:32.894 "superblock": true, 00:20:32.894 "num_base_bdevs": 2, 00:20:32.894 "num_base_bdevs_discovered": 2, 00:20:32.894 "num_base_bdevs_operational": 2, 00:20:32.894 "base_bdevs_list": [ 00:20:32.894 { 00:20:32.894 "name": "spare", 00:20:32.894 "uuid": "320cb78a-f460-57c5-9870-e2ded21cedb4", 00:20:32.894 "is_configured": true, 00:20:32.894 "data_offset": 256, 00:20:32.894 "data_size": 7936 00:20:32.894 }, 00:20:32.894 { 00:20:32.894 "name": "BaseBdev2", 00:20:32.894 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:32.894 "is_configured": true, 00:20:32.894 "data_offset": 256, 00:20:32.894 "data_size": 7936 00:20:32.894 } 00:20:32.894 ] 00:20:32.894 }' 00:20:32.894 14:20:09 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:32.894 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:32.894 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:32.894 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:32.894 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.894 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:20:32.894 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.894 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.895 [2024-11-27 14:20:10.142882] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:32.895 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.154 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:33.154 "name": "raid_bdev1", 00:20:33.154 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:33.154 "strip_size_kb": 0, 00:20:33.154 "state": "online", 00:20:33.154 "raid_level": "raid1", 00:20:33.154 "superblock": true, 00:20:33.154 "num_base_bdevs": 2, 00:20:33.154 "num_base_bdevs_discovered": 1, 00:20:33.154 "num_base_bdevs_operational": 1, 00:20:33.154 "base_bdevs_list": [ 00:20:33.154 { 00:20:33.154 "name": null, 00:20:33.154 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:33.154 "is_configured": false, 00:20:33.154 "data_offset": 0, 00:20:33.154 "data_size": 7936 00:20:33.154 }, 00:20:33.154 { 00:20:33.154 "name": "BaseBdev2", 00:20:33.154 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:33.154 "is_configured": true, 00:20:33.154 "data_offset": 256, 00:20:33.154 "data_size": 7936 00:20:33.154 } 00:20:33.154 ] 00:20:33.154 }' 00:20:33.154 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:33.154 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.413 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:33.413 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:33.413 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:33.413 [2024-11-27 14:20:10.659090] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:33.413 [2024-11-27 14:20:10.659376] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:33.413 [2024-11-27 14:20:10.659401] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:20:33.413 [2024-11-27 14:20:10.659463] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:33.413 [2024-11-27 14:20:10.672536] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1c20 00:20:33.413 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.413 14:20:10 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@757 -- # sleep 1 00:20:33.413 [2024-11-27 14:20:10.675265] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:34.813 "name": "raid_bdev1", 00:20:34.813 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:34.813 "strip_size_kb": 0, 00:20:34.813 "state": "online", 00:20:34.813 "raid_level": "raid1", 00:20:34.813 "superblock": true, 00:20:34.813 "num_base_bdevs": 2, 00:20:34.813 "num_base_bdevs_discovered": 2, 00:20:34.813 "num_base_bdevs_operational": 2, 00:20:34.813 "process": { 00:20:34.813 "type": "rebuild", 00:20:34.813 "target": "spare", 00:20:34.813 "progress": { 00:20:34.813 "blocks": 2560, 00:20:34.813 "percent": 32 00:20:34.813 } 00:20:34.813 }, 00:20:34.813 "base_bdevs_list": [ 00:20:34.813 { 00:20:34.813 "name": "spare", 00:20:34.813 "uuid": "320cb78a-f460-57c5-9870-e2ded21cedb4", 00:20:34.813 "is_configured": true, 00:20:34.813 "data_offset": 256, 00:20:34.813 "data_size": 7936 00:20:34.813 }, 00:20:34.813 { 00:20:34.813 "name": "BaseBdev2", 00:20:34.813 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:34.813 "is_configured": true, 00:20:34.813 "data_offset": 256, 00:20:34.813 "data_size": 7936 00:20:34.813 } 00:20:34.813 ] 00:20:34.813 }' 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:34.813 14:20:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.813 [2024-11-27 14:20:11.849016] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:34.813 [2024-11-27 14:20:11.884484] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:34.813 [2024-11-27 14:20:11.884907] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:34.813 [2024-11-27 14:20:11.884937] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:34.813 [2024-11-27 14:20:11.884967] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:34.813 14:20:11 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:34.813 "name": "raid_bdev1", 00:20:34.813 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:34.813 "strip_size_kb": 0, 00:20:34.813 "state": "online", 00:20:34.813 "raid_level": "raid1", 00:20:34.813 "superblock": true, 00:20:34.813 "num_base_bdevs": 2, 00:20:34.813 "num_base_bdevs_discovered": 1, 00:20:34.813 "num_base_bdevs_operational": 1, 00:20:34.813 "base_bdevs_list": [ 00:20:34.813 { 00:20:34.813 "name": null, 00:20:34.813 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:34.813 "is_configured": false, 00:20:34.813 "data_offset": 0, 00:20:34.813 "data_size": 7936 00:20:34.813 }, 00:20:34.813 { 00:20:34.813 "name": "BaseBdev2", 00:20:34.813 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:34.813 "is_configured": true, 00:20:34.813 "data_offset": 256, 00:20:34.813 "data_size": 7936 00:20:34.813 } 
00:20:34.813 ] 00:20:34.813 }' 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:34.813 14:20:11 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.382 14:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:35.382 14:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.382 14:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:35.382 [2024-11-27 14:20:12.436421] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:35.382 [2024-11-27 14:20:12.436659] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:35.382 [2024-11-27 14:20:12.436706] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ab80 00:20:35.382 [2024-11-27 14:20:12.436727] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:35.382 [2024-11-27 14:20:12.437057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:35.382 [2024-11-27 14:20:12.437098] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:35.382 [2024-11-27 14:20:12.437191] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:20:35.382 [2024-11-27 14:20:12.437250] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:20:35.382 [2024-11-27 14:20:12.437264] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:20:35.382 [2024-11-27 14:20:12.437299] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:35.382 spare 00:20:35.382 [2024-11-27 14:20:12.450355] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d0001c1cf0 00:20:35.382 14:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.382 14:20:12 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@764 -- # sleep 1 00:20:35.382 [2024-11-27 14:20:12.452831] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:36.319 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:36.320 "name": 
"raid_bdev1", 00:20:36.320 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:36.320 "strip_size_kb": 0, 00:20:36.320 "state": "online", 00:20:36.320 "raid_level": "raid1", 00:20:36.320 "superblock": true, 00:20:36.320 "num_base_bdevs": 2, 00:20:36.320 "num_base_bdevs_discovered": 2, 00:20:36.320 "num_base_bdevs_operational": 2, 00:20:36.320 "process": { 00:20:36.320 "type": "rebuild", 00:20:36.320 "target": "spare", 00:20:36.320 "progress": { 00:20:36.320 "blocks": 2560, 00:20:36.320 "percent": 32 00:20:36.320 } 00:20:36.320 }, 00:20:36.320 "base_bdevs_list": [ 00:20:36.320 { 00:20:36.320 "name": "spare", 00:20:36.320 "uuid": "320cb78a-f460-57c5-9870-e2ded21cedb4", 00:20:36.320 "is_configured": true, 00:20:36.320 "data_offset": 256, 00:20:36.320 "data_size": 7936 00:20:36.320 }, 00:20:36.320 { 00:20:36.320 "name": "BaseBdev2", 00:20:36.320 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:36.320 "is_configured": true, 00:20:36.320 "data_offset": 256, 00:20:36.320 "data_size": 7936 00:20:36.320 } 00:20:36.320 ] 00:20:36.320 }' 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:36.320 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.578 [2024-11-27 14:20:13.626665] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: 
*DEBUG*: spare 00:20:36.578 [2024-11-27 14:20:13.662296] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:36.578 [2024-11-27 14:20:13.662729] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:36.578 [2024-11-27 14:20:13.663003] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:36.578 [2024-11-27 14:20:13.663057] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:36.578 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:36.579 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:36.579 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:36.579 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd 
bdev_raid_get_bdevs all 00:20:36.579 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.579 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:36.579 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:36.579 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.579 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:36.579 "name": "raid_bdev1", 00:20:36.579 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:36.579 "strip_size_kb": 0, 00:20:36.579 "state": "online", 00:20:36.579 "raid_level": "raid1", 00:20:36.579 "superblock": true, 00:20:36.579 "num_base_bdevs": 2, 00:20:36.579 "num_base_bdevs_discovered": 1, 00:20:36.579 "num_base_bdevs_operational": 1, 00:20:36.579 "base_bdevs_list": [ 00:20:36.579 { 00:20:36.579 "name": null, 00:20:36.579 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:36.579 "is_configured": false, 00:20:36.579 "data_offset": 0, 00:20:36.579 "data_size": 7936 00:20:36.579 }, 00:20:36.579 { 00:20:36.579 "name": "BaseBdev2", 00:20:36.579 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:36.579 "is_configured": true, 00:20:36.579 "data_offset": 256, 00:20:36.579 "data_size": 7936 00:20:36.579 } 00:20:36.579 ] 00:20:36.579 }' 00:20:36.579 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:36.579 14:20:13 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:37.146 14:20:14 
bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:37.146 "name": "raid_bdev1", 00:20:37.146 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:37.146 "strip_size_kb": 0, 00:20:37.146 "state": "online", 00:20:37.146 "raid_level": "raid1", 00:20:37.146 "superblock": true, 00:20:37.146 "num_base_bdevs": 2, 00:20:37.146 "num_base_bdevs_discovered": 1, 00:20:37.146 "num_base_bdevs_operational": 1, 00:20:37.146 "base_bdevs_list": [ 00:20:37.146 { 00:20:37.146 "name": null, 00:20:37.146 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:37.146 "is_configured": false, 00:20:37.146 "data_offset": 0, 00:20:37.146 "data_size": 7936 00:20:37.146 }, 00:20:37.146 { 00:20:37.146 "name": "BaseBdev2", 00:20:37.146 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:37.146 "is_configured": true, 00:20:37.146 "data_offset": 256, 00:20:37.146 "data_size": 7936 00:20:37.146 } 00:20:37.146 ] 00:20:37.146 }' 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- 
bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.146 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:37.146 [2024-11-27 14:20:14.381835] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:37.146 [2024-11-27 14:20:14.381925] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:37.146 [2024-11-27 14:20:14.381959] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000b180 00:20:37.146 [2024-11-27 14:20:14.381974] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:37.146 [2024-11-27 14:20:14.382260] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:37.146 [2024-11-27 14:20:14.382282] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev1 00:20:37.146 [2024-11-27 14:20:14.382347] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:20:37.146 [2024-11-27 14:20:14.382367] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:37.146 [2024-11-27 14:20:14.382381] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:37.146 [2024-11-27 14:20:14.382408] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:20:37.146 BaseBdev1 00:20:37.147 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.147 14:20:14 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@775 -- # sleep 1 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 
00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:38.526 "name": "raid_bdev1", 00:20:38.526 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:38.526 "strip_size_kb": 0, 00:20:38.526 "state": "online", 00:20:38.526 "raid_level": "raid1", 00:20:38.526 "superblock": true, 00:20:38.526 "num_base_bdevs": 2, 00:20:38.526 "num_base_bdevs_discovered": 1, 00:20:38.526 "num_base_bdevs_operational": 1, 00:20:38.526 "base_bdevs_list": [ 00:20:38.526 { 00:20:38.526 "name": null, 00:20:38.526 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.526 "is_configured": false, 00:20:38.526 "data_offset": 0, 00:20:38.526 "data_size": 7936 00:20:38.526 }, 00:20:38.526 { 00:20:38.526 "name": "BaseBdev2", 00:20:38.526 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:38.526 "is_configured": true, 00:20:38.526 "data_offset": 256, 00:20:38.526 "data_size": 7936 00:20:38.526 } 00:20:38.526 ] 00:20:38.526 }' 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:38.526 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@777 
-- # verify_raid_bdev_process raid_bdev1 none none 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:38.785 "name": "raid_bdev1", 00:20:38.785 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:38.785 "strip_size_kb": 0, 00:20:38.785 "state": "online", 00:20:38.785 "raid_level": "raid1", 00:20:38.785 "superblock": true, 00:20:38.785 "num_base_bdevs": 2, 00:20:38.785 "num_base_bdevs_discovered": 1, 00:20:38.785 "num_base_bdevs_operational": 1, 00:20:38.785 "base_bdevs_list": [ 00:20:38.785 { 00:20:38.785 "name": null, 00:20:38.785 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:38.785 "is_configured": false, 00:20:38.785 "data_offset": 0, 00:20:38.785 "data_size": 7936 00:20:38.785 }, 00:20:38.785 { 00:20:38.785 "name": "BaseBdev2", 00:20:38.785 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:38.785 "is_configured": 
true, 00:20:38.785 "data_offset": 256, 00:20:38.785 "data_size": 7936 00:20:38.785 } 00:20:38.785 ] 00:20:38.785 }' 00:20:38.785 14:20:15 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:38.785 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:38.785 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@652 -- # local es=0 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:39.044 [2024-11-27 14:20:16.082495] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:39.044 [2024-11-27 14:20:16.082856] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:20:39.044 [2024-11-27 14:20:16.083011] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:20:39.044 request: 00:20:39.044 { 00:20:39.044 "base_bdev": "BaseBdev1", 00:20:39.044 "raid_bdev": "raid_bdev1", 00:20:39.044 "method": "bdev_raid_add_base_bdev", 00:20:39.044 "req_id": 1 00:20:39.044 } 00:20:39.044 Got JSON-RPC error response 00:20:39.044 response: 00:20:39.044 { 00:20:39.044 "code": -22, 00:20:39.044 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:20:39.044 } 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:39.044 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@655 -- # es=1 00:20:39.045 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:39.045 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:39.045 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:39.045 14:20:16 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@779 -- # sleep 1 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@105 -- # local 
raid_level=raid1 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:40.010 "name": "raid_bdev1", 00:20:40.010 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:40.010 "strip_size_kb": 0, 00:20:40.010 "state": "online", 00:20:40.010 "raid_level": "raid1", 00:20:40.010 "superblock": true, 00:20:40.010 "num_base_bdevs": 2, 00:20:40.010 "num_base_bdevs_discovered": 1, 00:20:40.010 "num_base_bdevs_operational": 1, 00:20:40.010 "base_bdevs_list": [ 00:20:40.010 { 00:20:40.010 "name": null, 00:20:40.010 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.010 "is_configured": false, 00:20:40.010 
"data_offset": 0, 00:20:40.010 "data_size": 7936 00:20:40.010 }, 00:20:40.010 { 00:20:40.010 "name": "BaseBdev2", 00:20:40.010 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:40.010 "is_configured": true, 00:20:40.010 "data_offset": 256, 00:20:40.010 "data_size": 7936 00:20:40.010 } 00:20:40.010 ] 00:20:40.010 }' 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:40.010 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:40.578 "name": "raid_bdev1", 00:20:40.578 "uuid": "100785a4-ed5b-4a9d-9cbe-1d353c3d979b", 00:20:40.578 
"strip_size_kb": 0, 00:20:40.578 "state": "online", 00:20:40.578 "raid_level": "raid1", 00:20:40.578 "superblock": true, 00:20:40.578 "num_base_bdevs": 2, 00:20:40.578 "num_base_bdevs_discovered": 1, 00:20:40.578 "num_base_bdevs_operational": 1, 00:20:40.578 "base_bdevs_list": [ 00:20:40.578 { 00:20:40.578 "name": null, 00:20:40.578 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:40.578 "is_configured": false, 00:20:40.578 "data_offset": 0, 00:20:40.578 "data_size": 7936 00:20:40.578 }, 00:20:40.578 { 00:20:40.578 "name": "BaseBdev2", 00:20:40.578 "uuid": "ddd5b1d6-97a7-5606-bc21-6bcdf03bae69", 00:20:40.578 "is_configured": true, 00:20:40.578 "data_offset": 256, 00:20:40.578 "data_size": 7936 00:20:40.578 } 00:20:40.578 ] 00:20:40.578 }' 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@784 -- # killprocess 88120 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@954 -- # '[' -z 88120 ']' 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@958 -- # kill -0 88120 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # uname 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88120 00:20:40.578 killing process with 
pid 88120 00:20:40.578 Received shutdown signal, test time was about 60.000000 seconds 00:20:40.578 00:20:40.578 Latency(us) 00:20:40.578 [2024-11-27T14:20:17.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:40.578 [2024-11-27T14:20:17.856Z] =================================================================================================================== 00:20:40.578 [2024-11-27T14:20:17.856Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88120' 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@973 -- # kill 88120 00:20:40.578 [2024-11-27 14:20:17.817527] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:40.578 14:20:17 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@978 -- # wait 88120 00:20:40.578 [2024-11-27 14:20:17.817682] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:40.578 [2024-11-27 14:20:17.817741] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:40.579 [2024-11-27 14:20:17.817758] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:20:40.838 [2024-11-27 14:20:18.093270] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:42.213 14:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- bdev/bdev_raid.sh@786 -- # return 0 00:20:42.213 00:20:42.213 real 0m21.608s 00:20:42.213 user 0m29.375s 00:20:42.213 sys 0m2.467s 00:20:42.213 14:20:19 
bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:42.213 ************************************ 00:20:42.213 END TEST raid_rebuild_test_sb_md_separate 00:20:42.213 ************************************ 00:20:42.213 14:20:19 bdev_raid.raid_rebuild_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:20:42.213 14:20:19 bdev_raid -- bdev/bdev_raid.sh@1010 -- # base_malloc_params='-m 32 -i' 00:20:42.213 14:20:19 bdev_raid -- bdev/bdev_raid.sh@1011 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:20:42.213 14:20:19 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:42.213 14:20:19 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:42.213 14:20:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:42.213 ************************************ 00:20:42.213 START TEST raid_state_function_test_sb_md_interleaved 00:20:42.213 ************************************ 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_state_function_test raid1 2 true 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # local raid_level=raid1 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # local num_base_bdevs=2 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # local superblock=true 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # local raid_bdev 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i = 1 )) 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:42.213 14:20:19 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev1 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # echo BaseBdev2 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i++ )) 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # (( i <= num_base_bdevs )) 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@209 -- # local base_bdevs 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@210 -- # local raid_bdev_name=Existed_Raid 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@211 -- # local strip_size 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@212 -- # local strip_size_create_arg 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # local superblock_create_arg 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@215 -- # '[' raid1 '!=' raid1 ']' 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@219 -- # strip_size=0 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # '[' true = true ']' 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # 
superblock_create_arg=-s 00:20:42.213 Process raid pid: 88819 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@229 -- # raid_pid=88819 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 -- # echo 'Process raid pid: 88819' 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@231 -- # waitforlisten 88819 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -i 0 -L bdev_raid 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 88819 ']' 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:42.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:42.213 14:20:19 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:42.213 [2024-11-27 14:20:19.282122] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:20:42.213 [2024-11-27 14:20:19.282641] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.213 [2024-11-27 14:20:19.466652] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:42.472 [2024-11-27 14:20:19.601098] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.730 [2024-11-27 14:20:19.801830] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:42.730 [2024-11-27 14:20:19.801929] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:43.297 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:43.297 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:43.297 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@235 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:43.297 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.297 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.297 [2024-11-27 14:20:20.286585] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:43.297 [2024-11-27 14:20:20.286876] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:43.297 [2024-11-27 14:20:20.287006] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:43.297 [2024-11-27 14:20:20.287143] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:43.297 14:20:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.297 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@236 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:43.297 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:43.297 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:43.297 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.298 14:20:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.298 "name": "Existed_Raid", 00:20:43.298 "uuid": "dcb8280d-9ff4-45fd-9a92-3692af619d0a", 00:20:43.298 "strip_size_kb": 0, 00:20:43.298 "state": "configuring", 00:20:43.298 "raid_level": "raid1", 00:20:43.298 "superblock": true, 00:20:43.298 "num_base_bdevs": 2, 00:20:43.298 "num_base_bdevs_discovered": 0, 00:20:43.298 "num_base_bdevs_operational": 2, 00:20:43.298 "base_bdevs_list": [ 00:20:43.298 { 00:20:43.298 "name": "BaseBdev1", 00:20:43.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.298 "is_configured": false, 00:20:43.298 "data_offset": 0, 00:20:43.298 "data_size": 0 00:20:43.298 }, 00:20:43.298 { 00:20:43.298 "name": "BaseBdev2", 00:20:43.298 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.298 "is_configured": false, 00:20:43.298 "data_offset": 0, 00:20:43.298 "data_size": 0 00:20:43.298 } 00:20:43.298 ] 00:20:43.298 }' 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.298 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.557 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:43.557 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.557 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.557 [2024-11-27 14:20:20.806667] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:43.557 [2024-11-27 14:20:20.806707] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name Existed_Raid, state 
configuring 00:20:43.557 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.557 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@241 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:43.557 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.557 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.557 [2024-11-27 14:20:20.814705] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:20:43.557 [2024-11-27 14:20:20.814830] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:20:43.557 [2024-11-27 14:20:20.815045] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:43.557 [2024-11-27 14:20:20.815122] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:43.557 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.557 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@242 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:20:43.557 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.557 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.816 [2024-11-27 14:20:20.857717] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:43.816 BaseBdev1 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@243 -- # waitforbdev BaseBdev1 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev1 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 -t 2000 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.816 [ 00:20:43.816 { 00:20:43.816 "name": "BaseBdev1", 00:20:43.816 "aliases": [ 00:20:43.816 "46f99428-27c9-49a2-82c9-8bf1e6d2bfab" 00:20:43.816 ], 00:20:43.816 "product_name": "Malloc disk", 00:20:43.816 "block_size": 4128, 00:20:43.816 "num_blocks": 8192, 00:20:43.816 "uuid": "46f99428-27c9-49a2-82c9-8bf1e6d2bfab", 00:20:43.816 "md_size": 32, 00:20:43.816 
"md_interleave": true, 00:20:43.816 "dif_type": 0, 00:20:43.816 "assigned_rate_limits": { 00:20:43.816 "rw_ios_per_sec": 0, 00:20:43.816 "rw_mbytes_per_sec": 0, 00:20:43.816 "r_mbytes_per_sec": 0, 00:20:43.816 "w_mbytes_per_sec": 0 00:20:43.816 }, 00:20:43.816 "claimed": true, 00:20:43.816 "claim_type": "exclusive_write", 00:20:43.816 "zoned": false, 00:20:43.816 "supported_io_types": { 00:20:43.816 "read": true, 00:20:43.816 "write": true, 00:20:43.816 "unmap": true, 00:20:43.816 "flush": true, 00:20:43.816 "reset": true, 00:20:43.816 "nvme_admin": false, 00:20:43.816 "nvme_io": false, 00:20:43.816 "nvme_io_md": false, 00:20:43.816 "write_zeroes": true, 00:20:43.816 "zcopy": true, 00:20:43.816 "get_zone_info": false, 00:20:43.816 "zone_management": false, 00:20:43.816 "zone_append": false, 00:20:43.816 "compare": false, 00:20:43.816 "compare_and_write": false, 00:20:43.816 "abort": true, 00:20:43.816 "seek_hole": false, 00:20:43.816 "seek_data": false, 00:20:43.816 "copy": true, 00:20:43.816 "nvme_iov_md": false 00:20:43.816 }, 00:20:43.816 "memory_domains": [ 00:20:43.816 { 00:20:43.816 "dma_device_id": "system", 00:20:43.816 "dma_device_type": 1 00:20:43.816 }, 00:20:43.816 { 00:20:43.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:43.816 "dma_device_type": 2 00:20:43.816 } 00:20:43.816 ], 00:20:43.816 "driver_specific": {} 00:20:43.816 } 00:20:43.816 ] 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:43.816 14:20:20 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.816 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:43.816 "name": "Existed_Raid", 00:20:43.816 "uuid": "6278524b-c3bd-40c8-95c5-7b699715b8b0", 00:20:43.816 "strip_size_kb": 0, 00:20:43.816 "state": "configuring", 00:20:43.816 "raid_level": "raid1", 
00:20:43.816 "superblock": true, 00:20:43.816 "num_base_bdevs": 2, 00:20:43.816 "num_base_bdevs_discovered": 1, 00:20:43.816 "num_base_bdevs_operational": 2, 00:20:43.817 "base_bdevs_list": [ 00:20:43.817 { 00:20:43.817 "name": "BaseBdev1", 00:20:43.817 "uuid": "46f99428-27c9-49a2-82c9-8bf1e6d2bfab", 00:20:43.817 "is_configured": true, 00:20:43.817 "data_offset": 256, 00:20:43.817 "data_size": 7936 00:20:43.817 }, 00:20:43.817 { 00:20:43.817 "name": "BaseBdev2", 00:20:43.817 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:43.817 "is_configured": false, 00:20:43.817 "data_offset": 0, 00:20:43.817 "data_size": 0 00:20:43.817 } 00:20:43.817 ] 00:20:43.817 }' 00:20:43.817 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:43.817 14:20:20 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # rpc_cmd bdev_raid_delete Existed_Raid 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.418 [2024-11-27 14:20:21.421998] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:20:44.418 [2024-11-27 14:20:21.422068] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name Existed_Raid, state configuring 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@249 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n Existed_Raid 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.418 [2024-11-27 14:20:21.430038] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:44.418 [2024-11-27 14:20:21.432687] bdev.c:8666:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:20:44.418 [2024-11-27 14:20:21.432755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i = 1 )) 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:20:44.418 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.419 
14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.419 "name": "Existed_Raid", 00:20:44.419 "uuid": "7198ca0e-463c-4cde-a7b5-3e1a8196a063", 00:20:44.419 "strip_size_kb": 0, 00:20:44.419 "state": "configuring", 00:20:44.419 "raid_level": "raid1", 00:20:44.419 "superblock": true, 00:20:44.419 "num_base_bdevs": 2, 00:20:44.419 "num_base_bdevs_discovered": 1, 00:20:44.419 "num_base_bdevs_operational": 2, 00:20:44.419 "base_bdevs_list": [ 00:20:44.419 { 00:20:44.419 "name": "BaseBdev1", 00:20:44.419 "uuid": "46f99428-27c9-49a2-82c9-8bf1e6d2bfab", 00:20:44.419 "is_configured": true, 00:20:44.419 "data_offset": 256, 00:20:44.419 "data_size": 7936 00:20:44.419 }, 00:20:44.419 { 00:20:44.419 "name": "BaseBdev2", 00:20:44.419 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:44.419 "is_configured": false, 00:20:44.419 "data_offset": 0, 00:20:44.419 "data_size": 0 00:20:44.419 } 00:20:44.419 ] 00:20:44.419 }' 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- 
# xtrace_disable 00:20:44.419 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.678 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:20:44.678 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.678 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.939 [2024-11-27 14:20:21.980638] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:44.939 BaseBdev2 00:20:44.939 [2024-11-27 14:20:21.981152] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:44.939 [2024-11-27 14:20:21.981178] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:44.939 [2024-11-27 14:20:21.981306] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:44.939 [2024-11-27 14:20:21.981400] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:44.939 [2024-11-27 14:20:21.981434] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x617000007e80 00:20:44.939 [2024-11-27 14:20:21.981525] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@253 -- # waitforbdev BaseBdev2 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@903 -- # local bdev_name=BaseBdev2 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # local bdev_timeout= 
00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # local i 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@908 -- # rpc_cmd bdev_wait_for_examine 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@910 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 -t 2000 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.939 14:20:21 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.939 [ 00:20:44.939 { 00:20:44.939 "name": "BaseBdev2", 00:20:44.939 "aliases": [ 00:20:44.939 "c5a1149a-30a7-4e76-af1e-c342cf23d9e7" 00:20:44.939 ], 00:20:44.939 "product_name": "Malloc disk", 00:20:44.939 "block_size": 4128, 00:20:44.939 "num_blocks": 8192, 00:20:44.939 "uuid": "c5a1149a-30a7-4e76-af1e-c342cf23d9e7", 00:20:44.939 "md_size": 32, 00:20:44.939 "md_interleave": true, 00:20:44.939 "dif_type": 0, 00:20:44.939 "assigned_rate_limits": { 00:20:44.939 "rw_ios_per_sec": 0, 00:20:44.939 "rw_mbytes_per_sec": 0, 00:20:44.939 "r_mbytes_per_sec": 0, 00:20:44.939 "w_mbytes_per_sec": 0 00:20:44.939 }, 00:20:44.939 "claimed": true, 00:20:44.939 "claim_type": "exclusive_write", 
00:20:44.939 "zoned": false, 00:20:44.939 "supported_io_types": { 00:20:44.939 "read": true, 00:20:44.939 "write": true, 00:20:44.939 "unmap": true, 00:20:44.939 "flush": true, 00:20:44.939 "reset": true, 00:20:44.939 "nvme_admin": false, 00:20:44.939 "nvme_io": false, 00:20:44.939 "nvme_io_md": false, 00:20:44.939 "write_zeroes": true, 00:20:44.939 "zcopy": true, 00:20:44.939 "get_zone_info": false, 00:20:44.939 "zone_management": false, 00:20:44.939 "zone_append": false, 00:20:44.939 "compare": false, 00:20:44.939 "compare_and_write": false, 00:20:44.939 "abort": true, 00:20:44.939 "seek_hole": false, 00:20:44.939 "seek_data": false, 00:20:44.939 "copy": true, 00:20:44.939 "nvme_iov_md": false 00:20:44.939 }, 00:20:44.939 "memory_domains": [ 00:20:44.939 { 00:20:44.939 "dma_device_id": "system", 00:20:44.939 "dma_device_type": 1 00:20:44.939 }, 00:20:44.939 { 00:20:44.939 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:44.939 "dma_device_type": 2 00:20:44.939 } 00:20:44.939 ], 00:20:44.939 "driver_specific": {} 00:20:44.939 } 00:20:44.939 ] 00:20:44.939 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.939 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@911 -- # return 0 00:20:44.939 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i++ )) 00:20:44.939 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # (( i < num_base_bdevs )) 00:20:44.939 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@255 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:20:44.939 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:44.939 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:44.939 
14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:44.940 "name": "Existed_Raid", 00:20:44.940 "uuid": "7198ca0e-463c-4cde-a7b5-3e1a8196a063", 00:20:44.940 "strip_size_kb": 0, 00:20:44.940 "state": "online", 00:20:44.940 "raid_level": "raid1", 00:20:44.940 "superblock": true, 00:20:44.940 "num_base_bdevs": 2, 00:20:44.940 "num_base_bdevs_discovered": 2, 00:20:44.940 
"num_base_bdevs_operational": 2, 00:20:44.940 "base_bdevs_list": [ 00:20:44.940 { 00:20:44.940 "name": "BaseBdev1", 00:20:44.940 "uuid": "46f99428-27c9-49a2-82c9-8bf1e6d2bfab", 00:20:44.940 "is_configured": true, 00:20:44.940 "data_offset": 256, 00:20:44.940 "data_size": 7936 00:20:44.940 }, 00:20:44.940 { 00:20:44.940 "name": "BaseBdev2", 00:20:44.940 "uuid": "c5a1149a-30a7-4e76-af1e-c342cf23d9e7", 00:20:44.940 "is_configured": true, 00:20:44.940 "data_offset": 256, 00:20:44.940 "data_size": 7936 00:20:44.940 } 00:20:44.940 ] 00:20:44.940 }' 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:44.940 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.510 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # verify_raid_bdev_properties Existed_Raid 00:20:45.510 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=Existed_Raid 00:20:45.510 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:45.510 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:45.510 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:45.510 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:45.510 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b Existed_Raid 00:20:45.510 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:45.510 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.510 14:20:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.510 [2024-11-27 14:20:22.549314] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:45.510 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.510 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:45.510 "name": "Existed_Raid", 00:20:45.510 "aliases": [ 00:20:45.510 "7198ca0e-463c-4cde-a7b5-3e1a8196a063" 00:20:45.510 ], 00:20:45.510 "product_name": "Raid Volume", 00:20:45.510 "block_size": 4128, 00:20:45.510 "num_blocks": 7936, 00:20:45.510 "uuid": "7198ca0e-463c-4cde-a7b5-3e1a8196a063", 00:20:45.510 "md_size": 32, 00:20:45.510 "md_interleave": true, 00:20:45.510 "dif_type": 0, 00:20:45.511 "assigned_rate_limits": { 00:20:45.511 "rw_ios_per_sec": 0, 00:20:45.511 "rw_mbytes_per_sec": 0, 00:20:45.511 "r_mbytes_per_sec": 0, 00:20:45.511 "w_mbytes_per_sec": 0 00:20:45.511 }, 00:20:45.511 "claimed": false, 00:20:45.511 "zoned": false, 00:20:45.511 "supported_io_types": { 00:20:45.511 "read": true, 00:20:45.511 "write": true, 00:20:45.511 "unmap": false, 00:20:45.511 "flush": false, 00:20:45.511 "reset": true, 00:20:45.511 "nvme_admin": false, 00:20:45.511 "nvme_io": false, 00:20:45.511 "nvme_io_md": false, 00:20:45.511 "write_zeroes": true, 00:20:45.511 "zcopy": false, 00:20:45.511 "get_zone_info": false, 00:20:45.511 "zone_management": false, 00:20:45.511 "zone_append": false, 00:20:45.511 "compare": false, 00:20:45.511 "compare_and_write": false, 00:20:45.511 "abort": false, 00:20:45.511 "seek_hole": false, 00:20:45.511 "seek_data": false, 00:20:45.511 "copy": false, 00:20:45.511 "nvme_iov_md": false 00:20:45.511 }, 00:20:45.511 "memory_domains": [ 00:20:45.511 { 00:20:45.511 "dma_device_id": "system", 00:20:45.511 "dma_device_type": 1 00:20:45.511 }, 00:20:45.511 { 00:20:45.511 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:20:45.511 "dma_device_type": 2 00:20:45.511 }, 00:20:45.511 { 00:20:45.511 "dma_device_id": "system", 00:20:45.511 "dma_device_type": 1 00:20:45.511 }, 00:20:45.511 { 00:20:45.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:45.511 "dma_device_type": 2 00:20:45.511 } 00:20:45.511 ], 00:20:45.511 "driver_specific": { 00:20:45.511 "raid": { 00:20:45.511 "uuid": "7198ca0e-463c-4cde-a7b5-3e1a8196a063", 00:20:45.511 "strip_size_kb": 0, 00:20:45.511 "state": "online", 00:20:45.511 "raid_level": "raid1", 00:20:45.511 "superblock": true, 00:20:45.511 "num_base_bdevs": 2, 00:20:45.511 "num_base_bdevs_discovered": 2, 00:20:45.511 "num_base_bdevs_operational": 2, 00:20:45.511 "base_bdevs_list": [ 00:20:45.511 { 00:20:45.511 "name": "BaseBdev1", 00:20:45.511 "uuid": "46f99428-27c9-49a2-82c9-8bf1e6d2bfab", 00:20:45.511 "is_configured": true, 00:20:45.511 "data_offset": 256, 00:20:45.511 "data_size": 7936 00:20:45.511 }, 00:20:45.511 { 00:20:45.511 "name": "BaseBdev2", 00:20:45.511 "uuid": "c5a1149a-30a7-4e76-af1e-c342cf23d9e7", 00:20:45.511 "is_configured": true, 00:20:45.511 "data_offset": 256, 00:20:45.511 "data_size": 7936 00:20:45.511 } 00:20:45.511 ] 00:20:45.511 } 00:20:45.511 } 00:20:45.511 }' 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='BaseBdev1 00:20:45.511 BaseBdev2' 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- 
# for name in $base_bdev_names 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev1 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b BaseBdev2 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:45.511 
14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # rpc_cmd bdev_malloc_delete BaseBdev1 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.511 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.511 [2024-11-27 14:20:22.785086] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # local expected_state 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@261 -- # has_redundancy raid1 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # expected_state=online 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=Existed_Raid 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:45.771 14:20:22 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:45.771 "name": "Existed_Raid", 00:20:45.771 "uuid": "7198ca0e-463c-4cde-a7b5-3e1a8196a063", 00:20:45.771 "strip_size_kb": 0, 00:20:45.771 "state": "online", 00:20:45.771 "raid_level": "raid1", 00:20:45.771 "superblock": true, 00:20:45.771 "num_base_bdevs": 2, 00:20:45.771 "num_base_bdevs_discovered": 1, 00:20:45.771 "num_base_bdevs_operational": 1, 00:20:45.771 "base_bdevs_list": [ 00:20:45.771 { 00:20:45.771 "name": null, 00:20:45.771 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:20:45.771 "is_configured": false, 00:20:45.771 "data_offset": 0, 00:20:45.771 "data_size": 7936 00:20:45.771 }, 00:20:45.771 { 00:20:45.771 "name": "BaseBdev2", 00:20:45.771 "uuid": "c5a1149a-30a7-4e76-af1e-c342cf23d9e7", 00:20:45.771 "is_configured": true, 00:20:45.771 "data_offset": 256, 00:20:45.771 "data_size": 7936 00:20:45.771 } 00:20:45.771 ] 00:20:45.771 }' 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:45.771 14:20:22 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i = 1 )) 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # jq -r '.[0]["name"]' 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # raid_bdev=Existed_Raid 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@272 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # rpc_cmd bdev_malloc_delete BaseBdev2 00:20:46.341 14:20:23 
bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:46.341 [2024-11-27 14:20:23.456566] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:20:46.341 [2024-11-27 14:20:23.456916] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:46.341 [2024-11-27 14:20:23.543594] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:46.341 [2024-11-27 14:20:23.543651] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:46.341 [2024-11-27 14:20:23.543671] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name Existed_Raid, state offline 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i++ )) 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # (( i < num_base_bdevs )) 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@278 -- # jq -r '.[0]["name"] | select(.)' 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@278 -- # raid_bdev= 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # '[' -n '' ']' 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@284 -- # '[' 2 -gt 2 ']' 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@326 -- # killprocess 88819 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 88819 ']' 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 88819 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:46.341 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88819 00:20:46.600 killing process with pid 88819 00:20:46.600 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:46.600 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:46.600 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88819' 00:20:46.600 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 88819 00:20:46.600 [2024-11-27 14:20:23.632621] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:46.600 14:20:23 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 88819 00:20:46.600 [2024-11-27 14:20:23.647045] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:47.539 
14:20:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@328 -- # return 0 00:20:47.539 00:20:47.539 real 0m5.503s 00:20:47.539 user 0m8.316s 00:20:47.539 sys 0m0.829s 00:20:47.539 ************************************ 00:20:47.539 END TEST raid_state_function_test_sb_md_interleaved 00:20:47.539 ************************************ 00:20:47.539 14:20:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.539 14:20:24 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:47.539 14:20:24 bdev_raid -- bdev/bdev_raid.sh@1012 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:20:47.539 14:20:24 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:47.539 14:20:24 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.539 14:20:24 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:47.539 ************************************ 00:20:47.539 START TEST raid_superblock_test_md_interleaved 00:20:47.539 ************************************ 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1129 -- # raid_superblock_test raid1 2 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local raid_level=raid1 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local num_base_bdevs=2 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_malloc=() 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_malloc 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt=() 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local 
base_bdevs_pt 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # base_bdevs_pt_uuid=() 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local base_bdevs_pt_uuid 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local raid_bdev_name=raid_bdev1 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local strip_size_create_arg 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev_uuid 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@402 -- # local raid_bdev 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@404 -- # '[' raid1 '!=' raid1 ']' 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@408 -- # strip_size=0 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # raid_pid=89077 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -L bdev_raid 00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@413 -- # waitforlisten 89077 00:20:47.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:47.539 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89077 ']' 00:20:47.540 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.540 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.540 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.540 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.540 14:20:24 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:47.540 [2024-11-27 14:20:24.813860] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:20:47.799 [2024-11-27 14:20:24.814955] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89077 ] 00:20:47.799 [2024-11-27 14:20:24.988727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.059 [2024-11-27 14:20:25.117609] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:48.059 [2024-11-27 14:20:25.308380] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.059 [2024-11-27 14:20:25.308687] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@416 -- # (( i = 1 )) 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc1 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt1 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.627 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.886 malloc1 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.886 [2024-11-27 14:20:25.948456] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on 
malloc1 00:20:48.886 [2024-11-27 14:20:25.949095] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.886 [2024-11-27 14:20:25.949253] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:48.886 [2024-11-27 14:20:25.949355] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.886 [2024-11-27 14:20:25.951998] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.886 pt1 00:20:48.886 [2024-11-27 14:20:25.952277] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_malloc=malloc2 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt=pt2 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@419 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_malloc+=($bdev_malloc) 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt+=($bdev_pt) 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@423 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.886 malloc2 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@426 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.886 14:20:25 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.886 [2024-11-27 14:20:26.001315] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:48.886 [2024-11-27 14:20:26.001685] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:48.886 [2024-11-27 14:20:26.001935] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:48.886 [2024-11-27 14:20:26.002139] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:48.886 [2024-11-27 14:20:26.004706] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:48.886 [2024-11-27 14:20:26.004993] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:48.886 pt2 00:20:48.886 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.886 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i++ )) 00:20:48.886 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # (( i <= num_base_bdevs )) 00:20:48.886 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''pt1 pt2'\''' -n raid_bdev1 -s 00:20:48.886 
14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.886 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.886 [2024-11-27 14:20:26.013400] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:48.886 [2024-11-27 14:20:26.015975] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:48.886 [2024-11-27 14:20:26.016243] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:48.886 [2024-11-27 14:20:26.016261] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:48.886 [2024-11-27 14:20:26.016378] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005d40 00:20:48.886 [2024-11-27 14:20:26.016463] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:48.886 [2024-11-27 14:20:26.016480] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:48.886 [2024-11-27 14:20:26.016559] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:48.886 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.886 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:48.886 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 
00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:48.887 "name": "raid_bdev1", 00:20:48.887 "uuid": "37e20533-9957-436c-81b3-e5232a295804", 00:20:48.887 "strip_size_kb": 0, 00:20:48.887 "state": "online", 00:20:48.887 "raid_level": "raid1", 00:20:48.887 "superblock": true, 00:20:48.887 "num_base_bdevs": 2, 00:20:48.887 "num_base_bdevs_discovered": 2, 00:20:48.887 "num_base_bdevs_operational": 2, 00:20:48.887 "base_bdevs_list": [ 00:20:48.887 { 00:20:48.887 "name": "pt1", 00:20:48.887 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:48.887 "is_configured": true, 00:20:48.887 "data_offset": 256, 00:20:48.887 "data_size": 7936 00:20:48.887 }, 00:20:48.887 { 00:20:48.887 "name": 
"pt2", 00:20:48.887 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:48.887 "is_configured": true, 00:20:48.887 "data_offset": 256, 00:20:48.887 "data_size": 7936 00:20:48.887 } 00:20:48.887 ] 00:20:48.887 }' 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:48.887 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.454 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@432 -- # verify_raid_bdev_properties raid_bdev1 00:20:49.454 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:49.454 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:49.454 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:49.454 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:49.454 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:49.454 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:49.454 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:49.454 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.455 [2024-11-27 14:20:26.558011] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # 
raid_bdev_info='{ 00:20:49.455 "name": "raid_bdev1", 00:20:49.455 "aliases": [ 00:20:49.455 "37e20533-9957-436c-81b3-e5232a295804" 00:20:49.455 ], 00:20:49.455 "product_name": "Raid Volume", 00:20:49.455 "block_size": 4128, 00:20:49.455 "num_blocks": 7936, 00:20:49.455 "uuid": "37e20533-9957-436c-81b3-e5232a295804", 00:20:49.455 "md_size": 32, 00:20:49.455 "md_interleave": true, 00:20:49.455 "dif_type": 0, 00:20:49.455 "assigned_rate_limits": { 00:20:49.455 "rw_ios_per_sec": 0, 00:20:49.455 "rw_mbytes_per_sec": 0, 00:20:49.455 "r_mbytes_per_sec": 0, 00:20:49.455 "w_mbytes_per_sec": 0 00:20:49.455 }, 00:20:49.455 "claimed": false, 00:20:49.455 "zoned": false, 00:20:49.455 "supported_io_types": { 00:20:49.455 "read": true, 00:20:49.455 "write": true, 00:20:49.455 "unmap": false, 00:20:49.455 "flush": false, 00:20:49.455 "reset": true, 00:20:49.455 "nvme_admin": false, 00:20:49.455 "nvme_io": false, 00:20:49.455 "nvme_io_md": false, 00:20:49.455 "write_zeroes": true, 00:20:49.455 "zcopy": false, 00:20:49.455 "get_zone_info": false, 00:20:49.455 "zone_management": false, 00:20:49.455 "zone_append": false, 00:20:49.455 "compare": false, 00:20:49.455 "compare_and_write": false, 00:20:49.455 "abort": false, 00:20:49.455 "seek_hole": false, 00:20:49.455 "seek_data": false, 00:20:49.455 "copy": false, 00:20:49.455 "nvme_iov_md": false 00:20:49.455 }, 00:20:49.455 "memory_domains": [ 00:20:49.455 { 00:20:49.455 "dma_device_id": "system", 00:20:49.455 "dma_device_type": 1 00:20:49.455 }, 00:20:49.455 { 00:20:49.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:49.455 "dma_device_type": 2 00:20:49.455 }, 00:20:49.455 { 00:20:49.455 "dma_device_id": "system", 00:20:49.455 "dma_device_type": 1 00:20:49.455 }, 00:20:49.455 { 00:20:49.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:49.455 "dma_device_type": 2 00:20:49.455 } 00:20:49.455 ], 00:20:49.455 "driver_specific": { 00:20:49.455 "raid": { 00:20:49.455 "uuid": "37e20533-9957-436c-81b3-e5232a295804", 00:20:49.455 
"strip_size_kb": 0, 00:20:49.455 "state": "online", 00:20:49.455 "raid_level": "raid1", 00:20:49.455 "superblock": true, 00:20:49.455 "num_base_bdevs": 2, 00:20:49.455 "num_base_bdevs_discovered": 2, 00:20:49.455 "num_base_bdevs_operational": 2, 00:20:49.455 "base_bdevs_list": [ 00:20:49.455 { 00:20:49.455 "name": "pt1", 00:20:49.455 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:49.455 "is_configured": true, 00:20:49.455 "data_offset": 256, 00:20:49.455 "data_size": 7936 00:20:49.455 }, 00:20:49.455 { 00:20:49.455 "name": "pt2", 00:20:49.455 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:49.455 "is_configured": true, 00:20:49.455 "data_offset": 256, 00:20:49.455 "data_size": 7936 00:20:49.455 } 00:20:49.455 ] 00:20:49.455 } 00:20:49.455 } 00:20:49.455 }' 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 00:20:49.455 pt2' 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | 
[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:49.455 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.715 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:49.715 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:49.715 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:49.715 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:49.715 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.715 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # jq -r '.[] | .uuid' 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 
-- # set +x 00:20:49.716 [2024-11-27 14:20:26.830062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # raid_bdev_uuid=37e20533-9957-436c-81b3-e5232a295804 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@436 -- # '[' -z 37e20533-9957-436c-81b3-e5232a295804 ']' 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.716 [2024-11-27 14:20:26.881646] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:49.716 [2024-11-27 14:20:26.881837] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:49.716 [2024-11-27 14:20:26.882065] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:49.716 [2024-11-27 14:20:26.882291] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:49.716 [2024-11-27 14:20:26.882430] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.716 14:20:26 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # jq -r '.[]' 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # raid_bdev= 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@443 -- # '[' -n '' ']' 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt1 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # for i in "${base_bdevs_pt[@]}" 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@449 -- # rpc_cmd bdev_passthru_delete pt2 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # rpc_cmd bdev_get_bdevs 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # jq -r '[.[] | 
select(.product_name == "passthru")] | any' 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.716 14:20:26 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.975 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@451 -- # '[' false == true ']' 00:20:49.975 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@457 -- # NOT rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:49.975 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:20:49.975 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:49.975 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:20:49.975 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.975 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:20:49.975 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:49.975 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_create -r raid1 -b ''\''malloc1 malloc2'\''' -n raid_bdev1 00:20:49.975 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.975 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.975 [2024-11-27 14:20:27.029730] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:20:49.975 [2024-11-27 14:20:27.032494] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:20:49.975 [2024-11-27 14:20:27.032587] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:20:49.975 [2024-11-27 14:20:27.033124] bdev_raid.c:3233:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:20:49.975 [2024-11-27 14:20:27.033248] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:49.975 [2024-11-27 14:20:27.033275] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state configuring 00:20:49.975 request: 00:20:49.975 { 00:20:49.975 "name": "raid_bdev1", 00:20:49.976 "raid_level": "raid1", 00:20:49.976 "base_bdevs": [ 00:20:49.976 "malloc1", 00:20:49.976 "malloc2" 00:20:49.976 ], 00:20:49.976 "superblock": false, 00:20:49.976 "method": "bdev_raid_create", 00:20:49.976 "req_id": 1 00:20:49.976 } 00:20:49.976 Got JSON-RPC error response 00:20:49.976 response: 00:20:49.976 { 00:20:49.976 "code": -17, 00:20:49.976 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:20:49.976 } 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@459 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # jq -r '.[]' 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # raid_bdev= 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@460 -- # '[' -n '' ']' 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@465 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:49.976 [2024-11-27 14:20:27.097840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:49.976 [2024-11-27 14:20:27.098258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:49.976 [2024-11-27 14:20:27.098485] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008a80 00:20:49.976 [2024-11-27 14:20:27.098730] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:49.976 [2024-11-27 14:20:27.101600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:49.976 [2024-11-27 14:20:27.101902] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:49.976 [2024-11-27 14:20:27.102182] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found 
on bdev pt1 00:20:49.976 [2024-11-27 14:20:27.102401] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:49.976 pt1 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@468 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=configuring 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- 
common/autotest_common.sh@10 -- # set +x 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:49.976 "name": "raid_bdev1", 00:20:49.976 "uuid": "37e20533-9957-436c-81b3-e5232a295804", 00:20:49.976 "strip_size_kb": 0, 00:20:49.976 "state": "configuring", 00:20:49.976 "raid_level": "raid1", 00:20:49.976 "superblock": true, 00:20:49.976 "num_base_bdevs": 2, 00:20:49.976 "num_base_bdevs_discovered": 1, 00:20:49.976 "num_base_bdevs_operational": 2, 00:20:49.976 "base_bdevs_list": [ 00:20:49.976 { 00:20:49.976 "name": "pt1", 00:20:49.976 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:49.976 "is_configured": true, 00:20:49.976 "data_offset": 256, 00:20:49.976 "data_size": 7936 00:20:49.976 }, 00:20:49.976 { 00:20:49.976 "name": null, 00:20:49.976 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:49.976 "is_configured": false, 00:20:49.976 "data_offset": 256, 00:20:49.976 "data_size": 7936 00:20:49.976 } 00:20:49.976 ] 00:20:49.976 }' 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:49.976 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:50.544 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@470 -- # '[' 2 -gt 2 ']' 00:20:50.544 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i = 1 )) 00:20:50.544 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:50.544 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@479 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:50.544 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 
-- # xtrace_disable 00:20:50.544 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:50.544 [2024-11-27 14:20:27.650510] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:50.544 [2024-11-27 14:20:27.650919] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:50.544 [2024-11-27 14:20:27.651152] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:50.544 [2024-11-27 14:20:27.651392] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:50.544 [2024-11-27 14:20:27.651833] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:50.544 [2024-11-27 14:20:27.651946] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:50.545 [2024-11-27 14:20:27.652106] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:50.545 [2024-11-27 14:20:27.652150] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:50.545 [2024-11-27 14:20:27.652275] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007e80 00:20:50.545 [2024-11-27 14:20:27.652299] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:50.545 [2024-11-27 14:20:27.652390] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 00:20:50.545 [2024-11-27 14:20:27.652482] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007e80 00:20:50.545 [2024-11-27 14:20:27.652496] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007e80 00:20:50.545 [2024-11-27 14:20:27.652577] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:50.545 pt2 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i++ )) 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # (( i < num_base_bdevs )) 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set 
+x 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:50.545 "name": "raid_bdev1", 00:20:50.545 "uuid": "37e20533-9957-436c-81b3-e5232a295804", 00:20:50.545 "strip_size_kb": 0, 00:20:50.545 "state": "online", 00:20:50.545 "raid_level": "raid1", 00:20:50.545 "superblock": true, 00:20:50.545 "num_base_bdevs": 2, 00:20:50.545 "num_base_bdevs_discovered": 2, 00:20:50.545 "num_base_bdevs_operational": 2, 00:20:50.545 "base_bdevs_list": [ 00:20:50.545 { 00:20:50.545 "name": "pt1", 00:20:50.545 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:50.545 "is_configured": true, 00:20:50.545 "data_offset": 256, 00:20:50.545 "data_size": 7936 00:20:50.545 }, 00:20:50.545 { 00:20:50.545 "name": "pt2", 00:20:50.545 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:50.545 "is_configured": true, 00:20:50.545 "data_offset": 256, 00:20:50.545 "data_size": 7936 00:20:50.545 } 00:20:50.545 ] 00:20:50.545 }' 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:50.545 14:20:27 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@484 -- # verify_raid_bdev_properties raid_bdev1 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@181 -- # local raid_bdev_name=raid_bdev1 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_info 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@183 -- # local base_bdev_names 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@184 -- # local name 00:20:51.112 14:20:28 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@185 -- # local cmp_raid_bdev cmp_base_bdev 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq '.[]' 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.112 [2024-11-27 14:20:28.179092] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:20:51.112 "name": "raid_bdev1", 00:20:51.112 "aliases": [ 00:20:51.112 "37e20533-9957-436c-81b3-e5232a295804" 00:20:51.112 ], 00:20:51.112 "product_name": "Raid Volume", 00:20:51.112 "block_size": 4128, 00:20:51.112 "num_blocks": 7936, 00:20:51.112 "uuid": "37e20533-9957-436c-81b3-e5232a295804", 00:20:51.112 "md_size": 32, 00:20:51.112 "md_interleave": true, 00:20:51.112 "dif_type": 0, 00:20:51.112 "assigned_rate_limits": { 00:20:51.112 "rw_ios_per_sec": 0, 00:20:51.112 "rw_mbytes_per_sec": 0, 00:20:51.112 "r_mbytes_per_sec": 0, 00:20:51.112 "w_mbytes_per_sec": 0 00:20:51.112 }, 00:20:51.112 "claimed": false, 00:20:51.112 "zoned": false, 00:20:51.112 "supported_io_types": { 00:20:51.112 "read": true, 00:20:51.112 "write": true, 00:20:51.112 "unmap": false, 00:20:51.112 "flush": false, 00:20:51.112 "reset": true, 00:20:51.112 "nvme_admin": false, 00:20:51.112 "nvme_io": false, 00:20:51.112 "nvme_io_md": false, 00:20:51.112 "write_zeroes": true, 00:20:51.112 "zcopy": false, 00:20:51.112 "get_zone_info": false, 00:20:51.112 "zone_management": 
false, 00:20:51.112 "zone_append": false, 00:20:51.112 "compare": false, 00:20:51.112 "compare_and_write": false, 00:20:51.112 "abort": false, 00:20:51.112 "seek_hole": false, 00:20:51.112 "seek_data": false, 00:20:51.112 "copy": false, 00:20:51.112 "nvme_iov_md": false 00:20:51.112 }, 00:20:51.112 "memory_domains": [ 00:20:51.112 { 00:20:51.112 "dma_device_id": "system", 00:20:51.112 "dma_device_type": 1 00:20:51.112 }, 00:20:51.112 { 00:20:51.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.112 "dma_device_type": 2 00:20:51.112 }, 00:20:51.112 { 00:20:51.112 "dma_device_id": "system", 00:20:51.112 "dma_device_type": 1 00:20:51.112 }, 00:20:51.112 { 00:20:51.112 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:20:51.112 "dma_device_type": 2 00:20:51.112 } 00:20:51.112 ], 00:20:51.112 "driver_specific": { 00:20:51.112 "raid": { 00:20:51.112 "uuid": "37e20533-9957-436c-81b3-e5232a295804", 00:20:51.112 "strip_size_kb": 0, 00:20:51.112 "state": "online", 00:20:51.112 "raid_level": "raid1", 00:20:51.112 "superblock": true, 00:20:51.112 "num_base_bdevs": 2, 00:20:51.112 "num_base_bdevs_discovered": 2, 00:20:51.112 "num_base_bdevs_operational": 2, 00:20:51.112 "base_bdevs_list": [ 00:20:51.112 { 00:20:51.112 "name": "pt1", 00:20:51.112 "uuid": "00000000-0000-0000-0000-000000000001", 00:20:51.112 "is_configured": true, 00:20:51.112 "data_offset": 256, 00:20:51.112 "data_size": 7936 00:20:51.112 }, 00:20:51.112 { 00:20:51.112 "name": "pt2", 00:20:51.112 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.112 "is_configured": true, 00:20:51.112 "data_offset": 256, 00:20:51.112 "data_size": 7936 00:20:51.112 } 00:20:51.112 ] 00:20:51.112 } 00:20:51.112 } 00:20:51.112 }' 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@188 -- # base_bdev_names='pt1 
00:20:51.112 pt2' 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '[.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.112 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@189 -- # cmp_raid_bdev='4128 32 true 0' 00:20:51.113 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.113 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, .md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.113 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt1 00:20:51.113 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.113 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.113 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@191 -- # for name in $base_bdev_names 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # rpc_cmd bdev_get_bdevs -b pt2 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # jq -r '.[] | [.block_size, 
.md_size, .md_interleave, .dif_type] | join(" ")' 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@192 -- # cmp_base_bdev='4128 32 true 0' 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@193 -- # [[ 4128 32 true 0 == \4\1\2\8\ \3\2\ \t\r\u\e\ \0 ]] 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # jq -r '.[] | .uuid' 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.371 [2024-11-27 14:20:28.455180] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@487 -- # '[' 37e20533-9957-436c-81b3-e5232a295804 '!=' 37e20533-9957-436c-81b3-e5232a295804 ']' 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@491 -- # has_redundancy raid1 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # case $1 in 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@199 -- # return 0 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@493 -- # rpc_cmd bdev_passthru_delete pt1 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.371 14:20:28 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.371 [2024-11-27 14:20:28.506921] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@496 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.371 14:20:28 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.371 "name": "raid_bdev1", 00:20:51.371 "uuid": "37e20533-9957-436c-81b3-e5232a295804", 00:20:51.371 "strip_size_kb": 0, 00:20:51.371 "state": "online", 00:20:51.371 "raid_level": "raid1", 00:20:51.371 "superblock": true, 00:20:51.371 "num_base_bdevs": 2, 00:20:51.371 "num_base_bdevs_discovered": 1, 00:20:51.371 "num_base_bdevs_operational": 1, 00:20:51.371 "base_bdevs_list": [ 00:20:51.371 { 00:20:51.371 "name": null, 00:20:51.371 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.371 "is_configured": false, 00:20:51.371 "data_offset": 0, 00:20:51.371 "data_size": 7936 00:20:51.371 }, 00:20:51.371 { 00:20:51.371 "name": "pt2", 00:20:51.371 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.371 "is_configured": true, 00:20:51.371 "data_offset": 256, 00:20:51.371 "data_size": 7936 00:20:51.371 } 00:20:51.371 ] 00:20:51.371 }' 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.371 14:20:28 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.937 [2024-11-27 14:20:29.035033] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:51.937 [2024-11-27 14:20:29.035191] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state 
changing from online to offline 00:20:51.937 [2024-11-27 14:20:29.035308] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:51.937 [2024-11-27 14:20:29.035373] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:51.937 [2024-11-27 14:20:29.035392] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007e80 name raid_bdev1, state offline 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # jq -r '.[]' 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # raid_bdev= 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@501 -- # '[' -n '' ']' 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i = 1 )) 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@507 -- # rpc_cmd bdev_passthru_delete pt2 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.937 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.938 
14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i++ )) 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # (( i < num_base_bdevs )) 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i = 1 )) 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@511 -- # (( i < num_base_bdevs - 1 )) 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # i=1 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@520 -- # rpc_cmd bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.938 [2024-11-27 14:20:29.115041] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:20:51.938 [2024-11-27 14:20:29.115665] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:51.938 [2024-11-27 14:20:29.115953] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009380 00:20:51.938 [2024-11-27 14:20:29.116167] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:51.938 [2024-11-27 14:20:29.118858] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:51.938 [2024-11-27 14:20:29.118919] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:20:51.938 [2024-11-27 14:20:29.118990] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:20:51.938 [2024-11-27 14:20:29.119056] 
bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:51.938 pt2 00:20:51.938 [2024-11-27 14:20:29.119220] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008200 00:20:51.938 [2024-11-27 14:20:29.119249] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:51.938 [2024-11-27 14:20:29.119359] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:51.938 [2024-11-27 14:20:29.119450] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008200 00:20:51.938 [2024-11-27 14:20:29.119463] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008200 00:20:51.938 [2024-11-27 14:20:29.119573] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@523 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local 
num_base_bdevs 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:51.938 "name": "raid_bdev1", 00:20:51.938 "uuid": "37e20533-9957-436c-81b3-e5232a295804", 00:20:51.938 "strip_size_kb": 0, 00:20:51.938 "state": "online", 00:20:51.938 "raid_level": "raid1", 00:20:51.938 "superblock": true, 00:20:51.938 "num_base_bdevs": 2, 00:20:51.938 "num_base_bdevs_discovered": 1, 00:20:51.938 "num_base_bdevs_operational": 1, 00:20:51.938 "base_bdevs_list": [ 00:20:51.938 { 00:20:51.938 "name": null, 00:20:51.938 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:51.938 "is_configured": false, 00:20:51.938 "data_offset": 256, 00:20:51.938 "data_size": 7936 00:20:51.938 }, 00:20:51.938 { 00:20:51.938 "name": "pt2", 00:20:51.938 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:51.938 "is_configured": true, 00:20:51.938 "data_offset": 256, 00:20:51.938 "data_size": 7936 00:20:51.938 } 00:20:51.938 ] 00:20:51.938 }' 00:20:51.938 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:51.938 14:20:29 
bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:52.503 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:20:52.503 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:52.504 [2024-11-27 14:20:29.643266] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:52.504 [2024-11-27 14:20:29.643300] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:20:52.504 [2024-11-27 14:20:29.643405] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:52.504 [2024-11-27 14:20:29.643469] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:52.504 [2024-11-27 14:20:29.643483] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008200 name raid_bdev1, state offline 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # jq -r '.[]' 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # raid_bdev= 00:20:52.504 14:20:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@528 -- # '[' -n '' ']' 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@532 -- # '[' 2 -gt 2 ']' 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@540 -- # rpc_cmd bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:52.504 [2024-11-27 14:20:29.703285] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:20:52.504 [2024-11-27 14:20:29.703545] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:52.504 [2024-11-27 14:20:29.703696] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009980 00:20:52.504 [2024-11-27 14:20:29.703831] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:52.504 [2024-11-27 14:20:29.706427] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:52.504 [2024-11-27 14:20:29.706463] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:20:52.504 [2024-11-27 14:20:29.706550] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:20:52.504 [2024-11-27 14:20:29.706606] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:20:52.504 [2024-11-27 14:20:29.706730] bdev_raid.c:3685:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:20:52.504 [2024-11-27 14:20:29.706747] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:20:52.504 [2024-11-27 14:20:29.706770] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 
0x617000008580 name raid_bdev1, state configuring 00:20:52.504 [2024-11-27 14:20:29.707022] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:20:52.504 [2024-11-27 14:20:29.707260] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000008900 00:20:52.504 [2024-11-27 14:20:29.707376] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:52.504 [2024-11-27 14:20:29.707507] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:52.504 [2024-11-27 14:20:29.707708] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000008900 00:20:52.504 [2024-11-27 14:20:29.707852] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000008900 00:20:52.504 [2024-11-27 14:20:29.708128] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:52.504 pt1 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@542 -- # '[' 2 -gt 2 ']' 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:52.504 14:20:29 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:52.504 "name": "raid_bdev1", 00:20:52.504 "uuid": "37e20533-9957-436c-81b3-e5232a295804", 00:20:52.504 "strip_size_kb": 0, 00:20:52.504 "state": "online", 00:20:52.504 "raid_level": "raid1", 00:20:52.504 "superblock": true, 00:20:52.504 "num_base_bdevs": 2, 00:20:52.504 "num_base_bdevs_discovered": 1, 00:20:52.504 "num_base_bdevs_operational": 1, 00:20:52.504 "base_bdevs_list": [ 00:20:52.504 { 00:20:52.504 "name": null, 00:20:52.504 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:52.504 "is_configured": false, 00:20:52.504 "data_offset": 256, 00:20:52.504 "data_size": 7936 00:20:52.504 }, 00:20:52.504 { 00:20:52.504 "name": "pt2", 00:20:52.504 "uuid": "00000000-0000-0000-0000-000000000002", 00:20:52.504 "is_configured": true, 00:20:52.504 "data_offset": 256, 00:20:52.504 
"data_size": 7936 00:20:52.504 } 00:20:52.504 ] 00:20:52.504 }' 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:52.504 14:20:29 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # rpc_cmd bdev_raid_get_bdevs online 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@555 -- # [[ false == \f\a\l\s\e ]] 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # jq -r '.[] | .uuid' 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:53.098 [2024-11-27 14:20:30.299915] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@558 -- # '[' 37e20533-9957-436c-81b3-e5232a295804 '!=' 37e20533-9957-436c-81b3-e5232a295804 ']' 00:20:53.098 14:20:30 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@563 -- # killprocess 89077 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89077 ']' 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89077 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:53.098 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89077 00:20:53.380 killing process with pid 89077 00:20:53.380 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:53.380 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:53.380 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89077' 00:20:53.380 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@973 -- # kill 89077 00:20:53.380 [2024-11-27 14:20:30.377707] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:20:53.380 [2024-11-27 14:20:30.377825] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:20:53.380 14:20:30 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@978 -- # wait 89077 00:20:53.380 [2024-11-27 14:20:30.377890] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:20:53.380 [2024-11-27 14:20:30.377912] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000008900 name raid_bdev1, state offline 00:20:53.380 [2024-11-27 14:20:30.562060] 
bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:20:54.758 ************************************ 00:20:54.758 END TEST raid_superblock_test_md_interleaved 00:20:54.758 ************************************ 00:20:54.758 14:20:31 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@565 -- # return 0 00:20:54.758 00:20:54.758 real 0m6.904s 00:20:54.758 user 0m10.977s 00:20:54.758 sys 0m0.978s 00:20:54.758 14:20:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:54.758 14:20:31 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:54.758 14:20:31 bdev_raid -- bdev/bdev_raid.sh@1013 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:20:54.758 14:20:31 bdev_raid -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:20:54.758 14:20:31 bdev_raid -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:54.758 14:20:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:20:54.758 ************************************ 00:20:54.758 START TEST raid_rebuild_test_sb_md_interleaved 00:20:54.758 ************************************ 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1129 -- # raid_rebuild_test raid1 2 true false false 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local raid_level=raid1 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local num_base_bdevs=2 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local superblock=true 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local background_io=false 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local verify=false 00:20:54.758 14:20:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i = 1 )) 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev1 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # echo BaseBdev2 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i++ )) 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # (( i <= num_base_bdevs )) 00:20:54.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local base_bdevs 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local raid_bdev_name=raid_bdev1 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local strip_size 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local create_arg 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local raid_bdev_size 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@579 -- # local data_offset 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@581 -- # '[' raid1 '!=' raid1 ']' 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@589 -- # strip_size=0 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # '[' true = true ']' 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@593 -- # create_arg+=' -s' 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # raid_pid=89406 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@598 -- # waitforlisten 89406 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@835 -- # '[' -z 89406 ']' 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:54.758 14:20:31 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:54.758 14:20:31 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:54.758 [2024-11-27 14:20:31.849384] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:20:54.758 [2024-11-27 14:20:31.850506] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid89406 ] 00:20:54.758 I/O size of 3145728 is greater than zero copy threshold (65536). 00:20:54.758 Zero copy mechanism will not be used. 
00:20:55.017 [2024-11-27 14:20:32.036118] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:55.017 [2024-11-27 14:20:32.157682] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:55.276 [2024-11-27 14:20:32.349759] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:55.276 [2024-11-27 14:20:32.350056] bdev_raid.c:1456:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:20:55.535 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:55.535 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@868 -- # return 0 00:20:55.535 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:55.535 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:20:55.535 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.535 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.794 BaseBdev1_malloc 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.794 [2024-11-27 14:20:32.842270] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:20:55.794 [2024-11-27 14:20:32.842341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.794 
[2024-11-27 14:20:32.842371] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007280 00:20:55.794 [2024-11-27 14:20:32.842389] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.794 [2024-11-27 14:20:32.844990] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.794 [2024-11-27 14:20:32.845224] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:20:55.794 BaseBdev1 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # for bdev in "${base_bdevs[@]}" 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.794 BaseBdev2_malloc 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@603 -- # rpc_cmd bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.794 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.794 [2024-11-27 14:20:32.893976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:20:55.794 [2024-11-27 14:20:32.894237] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.794 [2024-11-27 14:20:32.894365] vbdev_passthru.c: 
681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000007e80 00:20:55.794 [2024-11-27 14:20:32.894397] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.794 [2024-11-27 14:20:32.896982] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.794 [2024-11-27 14:20:32.897045] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:20:55.794 BaseBdev2 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # rpc_cmd bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.795 spare_malloc 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # rpc_cmd bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.795 spare_delay 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@609 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.795 14:20:32 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.795 [2024-11-27 14:20:32.967311] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:20:55.795 [2024-11-27 14:20:32.967540] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:20:55.795 [2024-11-27 14:20:32.967671] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009080 00:20:55.795 [2024-11-27 14:20:32.967828] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:20:55.795 [2024-11-27 14:20:32.970247] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:20:55.795 [2024-11-27 14:20:32.970312] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:20:55.795 spare 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # rpc_cmd bdev_raid_create -s -r raid1 -b ''\''BaseBdev1 BaseBdev2'\''' -n raid_bdev1 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.795 [2024-11-27 14:20:32.975342] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:20:55.795 [2024-11-27 14:20:32.977919] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:20:55.795 [2024-11-27 14:20:32.978342] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007780 00:20:55.795 [2024-11-27 14:20:32.978505] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:20:55.795 [2024-11-27 14:20:32.978716] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005e10 
00:20:55.795 [2024-11-27 14:20:32.978986] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007780 00:20:55.795 [2024-11-27 14:20:32.979105] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007780 00:20:55.795 [2024-11-27 14:20:32.979466] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@613 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # 
jq -r '.[] | select(.name == "raid_bdev1")' 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.795 14:20:32 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:55.795 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.795 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:55.795 "name": "raid_bdev1", 00:20:55.795 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:20:55.795 "strip_size_kb": 0, 00:20:55.795 "state": "online", 00:20:55.795 "raid_level": "raid1", 00:20:55.795 "superblock": true, 00:20:55.795 "num_base_bdevs": 2, 00:20:55.795 "num_base_bdevs_discovered": 2, 00:20:55.795 "num_base_bdevs_operational": 2, 00:20:55.795 "base_bdevs_list": [ 00:20:55.795 { 00:20:55.795 "name": "BaseBdev1", 00:20:55.795 "uuid": "f1ca1e41-6d3e-5160-b66c-75dacec7c023", 00:20:55.795 "is_configured": true, 00:20:55.795 "data_offset": 256, 00:20:55.795 "data_size": 7936 00:20:55.795 }, 00:20:55.795 { 00:20:55.795 "name": "BaseBdev2", 00:20:55.795 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:20:55.795 "is_configured": true, 00:20:55.795 "data_offset": 256, 00:20:55.795 "data_size": 7936 00:20:55.795 } 00:20:55.795 ] 00:20:55.795 }' 00:20:55.795 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:55.795 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # rpc_cmd bdev_get_bdevs -b raid_bdev1 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # jq -r '.[].num_blocks' 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.363 
14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.363 [2024-11-27 14:20:33.512062] bdev_raid.c:1133:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@616 -- # raid_bdev_size=7936 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@619 -- # data_offset=256 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@621 -- # '[' false = true ']' 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@624 -- # '[' false = true ']' 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@640 -- # rpc_cmd bdev_raid_remove_base_bdev BaseBdev1 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.363 [2024-11-27 14:20:33.619738] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@643 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.363 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:56.622 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.622 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:56.622 "name": "raid_bdev1", 00:20:56.622 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:20:56.622 "strip_size_kb": 0, 00:20:56.622 "state": "online", 00:20:56.622 "raid_level": "raid1", 00:20:56.622 "superblock": true, 00:20:56.622 "num_base_bdevs": 2, 00:20:56.622 "num_base_bdevs_discovered": 1, 00:20:56.622 "num_base_bdevs_operational": 1, 00:20:56.622 "base_bdevs_list": [ 00:20:56.622 { 00:20:56.622 "name": null, 00:20:56.622 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:56.622 "is_configured": false, 00:20:56.622 "data_offset": 0, 00:20:56.622 "data_size": 7936 00:20:56.622 }, 00:20:56.622 { 00:20:56.622 "name": "BaseBdev2", 00:20:56.622 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:20:56.622 "is_configured": true, 00:20:56.622 "data_offset": 256, 00:20:56.622 "data_size": 7936 00:20:56.622 } 00:20:56.622 ] 00:20:56.622 }' 00:20:56.622 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:56.622 14:20:33 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.191 14:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:57.191 14:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.191 14:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:57.191 [2024-11-27 14:20:34.183985] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:57.191 [2024-11-27 14:20:34.200766] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005ee0 00:20:57.191 14:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.191 14:20:34 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@647 -- # sleep 1 00:20:57.191 
[2024-11-27 14:20:34.203530] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@650 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.197 "name": "raid_bdev1", 00:20:58.197 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:20:58.197 "strip_size_kb": 0, 00:20:58.197 "state": "online", 00:20:58.197 "raid_level": "raid1", 00:20:58.197 "superblock": true, 00:20:58.197 "num_base_bdevs": 2, 00:20:58.197 "num_base_bdevs_discovered": 2, 00:20:58.197 "num_base_bdevs_operational": 2, 00:20:58.197 "process": { 00:20:58.197 "type": "rebuild", 00:20:58.197 "target": "spare", 00:20:58.197 "progress": { 00:20:58.197 
"blocks": 2560, 00:20:58.197 "percent": 32 00:20:58.197 } 00:20:58.197 }, 00:20:58.197 "base_bdevs_list": [ 00:20:58.197 { 00:20:58.197 "name": "spare", 00:20:58.197 "uuid": "440a2708-f715-5ed2-8f15-126515cc6b77", 00:20:58.197 "is_configured": true, 00:20:58.197 "data_offset": 256, 00:20:58.197 "data_size": 7936 00:20:58.197 }, 00:20:58.197 { 00:20:58.197 "name": "BaseBdev2", 00:20:58.197 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:20:58.197 "is_configured": true, 00:20:58.197 "data_offset": 256, 00:20:58.197 "data_size": 7936 00:20:58.197 } 00:20:58.197 ] 00:20:58.197 }' 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@653 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.197 [2024-11-27 14:20:35.376978] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:58.197 [2024-11-27 14:20:35.412978] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:20:58.197 [2024-11-27 14:20:35.413228] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:20:58.197 [2024-11-27 14:20:35.413256] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:20:58.197 [2024-11-27 14:20:35.413277] 
bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@656 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 
00:20:58.197 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.456 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:20:58.456 "name": "raid_bdev1", 00:20:58.456 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:20:58.456 "strip_size_kb": 0, 00:20:58.456 "state": "online", 00:20:58.456 "raid_level": "raid1", 00:20:58.456 "superblock": true, 00:20:58.456 "num_base_bdevs": 2, 00:20:58.456 "num_base_bdevs_discovered": 1, 00:20:58.456 "num_base_bdevs_operational": 1, 00:20:58.456 "base_bdevs_list": [ 00:20:58.456 { 00:20:58.456 "name": null, 00:20:58.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.456 "is_configured": false, 00:20:58.456 "data_offset": 0, 00:20:58.456 "data_size": 7936 00:20:58.456 }, 00:20:58.456 { 00:20:58.456 "name": "BaseBdev2", 00:20:58.456 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:20:58.456 "is_configured": true, 00:20:58.456 "data_offset": 256, 00:20:58.456 "data_size": 7936 00:20:58.456 } 00:20:58.456 ] 00:20:58.456 }' 00:20:58.456 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:20:58.456 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.715 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@659 -- # verify_raid_bdev_process raid_bdev1 none none 00:20:58.715 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:58.715 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:20:58.715 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:20:58.715 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:58.715 14:20:35 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:58.715 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.715 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.715 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:20:58.715 14:20:35 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.975 14:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:20:58.975 "name": "raid_bdev1", 00:20:58.975 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:20:58.975 "strip_size_kb": 0, 00:20:58.975 "state": "online", 00:20:58.975 "raid_level": "raid1", 00:20:58.975 "superblock": true, 00:20:58.975 "num_base_bdevs": 2, 00:20:58.975 "num_base_bdevs_discovered": 1, 00:20:58.975 "num_base_bdevs_operational": 1, 00:20:58.975 "base_bdevs_list": [ 00:20:58.975 { 00:20:58.975 "name": null, 00:20:58.975 "uuid": "00000000-0000-0000-0000-000000000000", 00:20:58.975 "is_configured": false, 00:20:58.975 "data_offset": 0, 00:20:58.975 "data_size": 7936 00:20:58.975 }, 00:20:58.975 { 00:20:58.975 "name": "BaseBdev2", 00:20:58.975 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:20:58.975 "is_configured": true, 00:20:58.975 "data_offset": 256, 00:20:58.975 "data_size": 7936 00:20:58.975 } 00:20:58.975 ] 00:20:58.975 }' 00:20:58.975 14:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:20:58.975 14:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:20:58.975 14:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:20:58.975 14:20:36 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:20:58.975 14:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:20:58.976 14:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.976 14:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:58.976 [2024-11-27 14:20:36.138010] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:20:58.976 [2024-11-27 14:20:36.153589] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000005fb0 00:20:58.976 14:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.976 14:20:36 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # sleep 1 00:20:58.976 [2024-11-27 14:20:36.156357] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:20:59.913 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@664 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:20:59.913 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:20:59.913 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:20:59.913 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:20:59.913 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:20:59.913 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:20:59.913 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:20:59.913 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.913 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:20:59.913 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.174 "name": "raid_bdev1", 00:21:00.174 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:00.174 "strip_size_kb": 0, 00:21:00.174 "state": "online", 00:21:00.174 "raid_level": "raid1", 00:21:00.174 "superblock": true, 00:21:00.174 "num_base_bdevs": 2, 00:21:00.174 "num_base_bdevs_discovered": 2, 00:21:00.174 "num_base_bdevs_operational": 2, 00:21:00.174 "process": { 00:21:00.174 "type": "rebuild", 00:21:00.174 "target": "spare", 00:21:00.174 "progress": { 00:21:00.174 "blocks": 2560, 00:21:00.174 "percent": 32 00:21:00.174 } 00:21:00.174 }, 00:21:00.174 "base_bdevs_list": [ 00:21:00.174 { 00:21:00.174 "name": "spare", 00:21:00.174 "uuid": "440a2708-f715-5ed2-8f15-126515cc6b77", 00:21:00.174 "is_configured": true, 00:21:00.174 "data_offset": 256, 00:21:00.174 "data_size": 7936 00:21:00.174 }, 00:21:00.174 { 00:21:00.174 "name": "BaseBdev2", 00:21:00.174 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:00.174 "is_configured": true, 00:21:00.174 "data_offset": 256, 00:21:00.174 "data_size": 7936 00:21:00.174 } 00:21:00.174 ] 00:21:00.174 }' 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.174 14:20:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' true = true ']' 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@666 -- # '[' = false ']' 00:21:00.174 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 666: [: =: unary operator expected 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@691 -- # local num_base_bdevs_operational=2 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' raid1 = raid1 ']' 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@693 -- # '[' 2 -gt 2 ']' 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # local timeout=804 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:00.174 14:20:37 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.174 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:00.174 "name": "raid_bdev1", 00:21:00.174 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:00.174 "strip_size_kb": 0, 00:21:00.174 "state": "online", 00:21:00.174 "raid_level": "raid1", 00:21:00.174 "superblock": true, 00:21:00.174 "num_base_bdevs": 2, 00:21:00.174 "num_base_bdevs_discovered": 2, 00:21:00.174 "num_base_bdevs_operational": 2, 00:21:00.174 "process": { 00:21:00.174 "type": "rebuild", 00:21:00.174 "target": "spare", 00:21:00.174 "progress": { 00:21:00.174 "blocks": 2816, 00:21:00.174 "percent": 35 00:21:00.174 } 00:21:00.174 }, 00:21:00.174 "base_bdevs_list": [ 00:21:00.174 { 00:21:00.174 "name": "spare", 00:21:00.175 "uuid": "440a2708-f715-5ed2-8f15-126515cc6b77", 00:21:00.175 "is_configured": true, 00:21:00.175 "data_offset": 256, 00:21:00.175 "data_size": 7936 00:21:00.175 }, 00:21:00.175 { 00:21:00.175 "name": "BaseBdev2", 00:21:00.175 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:00.175 "is_configured": true, 00:21:00.175 "data_offset": 256, 00:21:00.175 "data_size": 7936 00:21:00.175 } 00:21:00.175 ] 00:21:00.175 }' 00:21:00.175 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:00.175 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:00.175 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:00.435 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:00.435 14:20:37 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:01.373 "name": "raid_bdev1", 00:21:01.373 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:01.373 "strip_size_kb": 0, 00:21:01.373 "state": "online", 00:21:01.373 "raid_level": "raid1", 00:21:01.373 "superblock": true, 00:21:01.373 "num_base_bdevs": 2, 00:21:01.373 "num_base_bdevs_discovered": 2, 00:21:01.373 
"num_base_bdevs_operational": 2, 00:21:01.373 "process": { 00:21:01.373 "type": "rebuild", 00:21:01.373 "target": "spare", 00:21:01.373 "progress": { 00:21:01.373 "blocks": 5888, 00:21:01.373 "percent": 74 00:21:01.373 } 00:21:01.373 }, 00:21:01.373 "base_bdevs_list": [ 00:21:01.373 { 00:21:01.373 "name": "spare", 00:21:01.373 "uuid": "440a2708-f715-5ed2-8f15-126515cc6b77", 00:21:01.373 "is_configured": true, 00:21:01.373 "data_offset": 256, 00:21:01.373 "data_size": 7936 00:21:01.373 }, 00:21:01.373 { 00:21:01.373 "name": "BaseBdev2", 00:21:01.373 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:01.373 "is_configured": true, 00:21:01.373 "data_offset": 256, 00:21:01.373 "data_size": 7936 00:21:01.373 } 00:21:01.373 ] 00:21:01.373 }' 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:01.373 14:20:38 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@711 -- # sleep 1 00:21:02.019 [2024-11-27 14:20:39.280141] bdev_raid.c:2900:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:21:02.019 [2024-11-27 14:20:39.280264] bdev_raid.c:2562:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:21:02.019 [2024-11-27 14:20:39.280450] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # (( SECONDS < timeout )) 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # verify_raid_bdev_process 
raid_bdev1 rebuild spare 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.586 "name": "raid_bdev1", 00:21:02.586 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:02.586 "strip_size_kb": 0, 00:21:02.586 "state": "online", 00:21:02.586 "raid_level": "raid1", 00:21:02.586 "superblock": true, 00:21:02.586 "num_base_bdevs": 2, 00:21:02.586 "num_base_bdevs_discovered": 2, 00:21:02.586 "num_base_bdevs_operational": 2, 00:21:02.586 "base_bdevs_list": [ 00:21:02.586 { 00:21:02.586 "name": "spare", 00:21:02.586 "uuid": "440a2708-f715-5ed2-8f15-126515cc6b77", 00:21:02.586 "is_configured": true, 00:21:02.586 "data_offset": 256, 00:21:02.586 "data_size": 7936 00:21:02.586 }, 00:21:02.586 { 00:21:02.586 "name": "BaseBdev2", 00:21:02.586 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:02.586 
"is_configured": true, 00:21:02.586 "data_offset": 256, 00:21:02.586 "data_size": 7936 00:21:02.586 } 00:21:02.586 ] 00:21:02.586 }' 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \r\e\b\u\i\l\d ]] 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \s\p\a\r\e ]] 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@709 -- # break 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:21:02.586 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:02.586 "name": "raid_bdev1", 00:21:02.586 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:02.586 "strip_size_kb": 0, 00:21:02.586 "state": "online", 00:21:02.586 "raid_level": "raid1", 00:21:02.586 "superblock": true, 00:21:02.586 "num_base_bdevs": 2, 00:21:02.586 "num_base_bdevs_discovered": 2, 00:21:02.586 "num_base_bdevs_operational": 2, 00:21:02.586 "base_bdevs_list": [ 00:21:02.586 { 00:21:02.586 "name": "spare", 00:21:02.586 "uuid": "440a2708-f715-5ed2-8f15-126515cc6b77", 00:21:02.587 "is_configured": true, 00:21:02.587 "data_offset": 256, 00:21:02.587 "data_size": 7936 00:21:02.587 }, 00:21:02.587 { 00:21:02.587 "name": "BaseBdev2", 00:21:02.587 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:02.587 "is_configured": true, 00:21:02.587 "data_offset": 256, 00:21:02.587 "data_size": 7936 00:21:02.587 } 00:21:02.587 ] 00:21:02.587 }' 00:21:02.587 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@716 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # 
local raid_level=raid1 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:02.845 14:20:39 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.845 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:02.845 "name": "raid_bdev1", 00:21:02.845 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:02.845 "strip_size_kb": 0, 00:21:02.845 "state": "online", 00:21:02.845 "raid_level": "raid1", 00:21:02.845 "superblock": true, 00:21:02.845 "num_base_bdevs": 2, 00:21:02.845 "num_base_bdevs_discovered": 2, 00:21:02.845 "num_base_bdevs_operational": 2, 00:21:02.845 "base_bdevs_list": [ 00:21:02.845 { 00:21:02.845 "name": "spare", 00:21:02.845 "uuid": "440a2708-f715-5ed2-8f15-126515cc6b77", 00:21:02.845 
"is_configured": true, 00:21:02.845 "data_offset": 256, 00:21:02.845 "data_size": 7936 00:21:02.845 }, 00:21:02.845 { 00:21:02.845 "name": "BaseBdev2", 00:21:02.845 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:02.845 "is_configured": true, 00:21:02.845 "data_offset": 256, 00:21:02.845 "data_size": 7936 00:21:02.845 } 00:21:02.845 ] 00:21:02.845 }' 00:21:02.845 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:02.845 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # rpc_cmd bdev_raid_delete raid_bdev1 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.414 [2024-11-27 14:20:40.498227] bdev_raid.c:2411:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:21:03.414 [2024-11-27 14:20:40.498383] bdev_raid.c:1899:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:21:03.414 [2024-11-27 14:20:40.498521] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:03.414 [2024-11-27 14:20:40.498608] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:21:03.414 [2024-11-27 14:20:40.498626] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007780 name raid_bdev1, state offline 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # jq length 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.414 
14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@720 -- # [[ 0 == 0 ]] 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@722 -- # '[' false = true ']' 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@743 -- # '[' true = true ']' 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # rpc_cmd bdev_passthru_delete spare 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@746 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.414 [2024-11-27 14:20:40.570177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:03.414 [2024-11-27 14:20:40.570431] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:03.414 [2024-11-27 14:20:40.570473] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009f80 00:21:03.414 [2024-11-27 14:20:40.570488] vbdev_passthru.c: 
696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:03.414 [2024-11-27 14:20:40.573032] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:03.414 [2024-11-27 14:20:40.573073] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:03.414 [2024-11-27 14:20:40.573163] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:03.414 [2024-11-27 14:20:40.573223] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:03.414 [2024-11-27 14:20:40.573369] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:21:03.414 spare 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # rpc_cmd bdev_wait_for_examine 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.414 [2024-11-27 14:20:40.673491] bdev_raid.c:1734:raid_bdev_configure_cont: *DEBUG*: io device register 0x617000007b00 00:21:03.414 [2024-11-27 14:20:40.673779] bdev_raid.c:1735:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:21:03.414 [2024-11-27 14:20:40.674003] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006080 00:21:03.414 [2024-11-27 14:20:40.674284] bdev_raid.c:1764:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x617000007b00 00:21:03.414 [2024-11-27 14:20:40.674310] bdev_raid.c:1765:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x617000007b00 00:21:03.414 [2024-11-27 14:20:40.674483] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:03.414 14:20:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:21:03.414 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=2 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.415 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.674 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.674 14:20:40 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:03.674 "name": "raid_bdev1", 00:21:03.674 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:03.674 "strip_size_kb": 0, 00:21:03.674 "state": "online", 00:21:03.674 "raid_level": "raid1", 00:21:03.674 "superblock": true, 00:21:03.674 "num_base_bdevs": 2, 00:21:03.674 "num_base_bdevs_discovered": 2, 00:21:03.674 "num_base_bdevs_operational": 2, 00:21:03.674 "base_bdevs_list": [ 00:21:03.674 { 00:21:03.674 "name": "spare", 00:21:03.674 "uuid": "440a2708-f715-5ed2-8f15-126515cc6b77", 00:21:03.674 "is_configured": true, 00:21:03.674 "data_offset": 256, 00:21:03.674 "data_size": 7936 00:21:03.674 }, 00:21:03.674 { 00:21:03.674 "name": "BaseBdev2", 00:21:03.674 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:03.674 "is_configured": true, 00:21:03.674 "data_offset": 256, 00:21:03.674 "data_size": 7936 00:21:03.674 } 00:21:03.674 ] 00:21:03.674 }' 00:21:03.674 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:03.674 14:20:40 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:03.933 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@750 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:03.933 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:03.933 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:03.933 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:03.933 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:03.933 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:03.933 14:20:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:03.933 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.933 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:04.191 "name": "raid_bdev1", 00:21:04.191 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:04.191 "strip_size_kb": 0, 00:21:04.191 "state": "online", 00:21:04.191 "raid_level": "raid1", 00:21:04.191 "superblock": true, 00:21:04.191 "num_base_bdevs": 2, 00:21:04.191 "num_base_bdevs_discovered": 2, 00:21:04.191 "num_base_bdevs_operational": 2, 00:21:04.191 "base_bdevs_list": [ 00:21:04.191 { 00:21:04.191 "name": "spare", 00:21:04.191 "uuid": "440a2708-f715-5ed2-8f15-126515cc6b77", 00:21:04.191 "is_configured": true, 00:21:04.191 "data_offset": 256, 00:21:04.191 "data_size": 7936 00:21:04.191 }, 00:21:04.191 { 00:21:04.191 "name": "BaseBdev2", 00:21:04.191 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:04.191 "is_configured": true, 00:21:04.191 "data_offset": 256, 00:21:04.191 "data_size": 7936 00:21:04.191 } 00:21:04.191 ] 00:21:04.191 }' 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:04.191 14:20:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # jq -r '.[].base_bdevs_list[0].name' 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@751 -- # [[ spare == \s\p\a\r\e ]] 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # rpc_cmd bdev_raid_remove_base_bdev spare 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.191 [2024-11-27 14:20:41.426897] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:04.191 14:20:41 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:04.191 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.192 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.192 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.450 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:04.450 "name": "raid_bdev1", 00:21:04.450 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:04.450 "strip_size_kb": 0, 00:21:04.450 "state": "online", 00:21:04.450 "raid_level": "raid1", 00:21:04.450 "superblock": true, 00:21:04.450 "num_base_bdevs": 2, 00:21:04.450 "num_base_bdevs_discovered": 1, 00:21:04.450 "num_base_bdevs_operational": 1, 00:21:04.450 "base_bdevs_list": [ 00:21:04.450 { 00:21:04.450 "name": null, 00:21:04.450 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:04.450 "is_configured": false, 00:21:04.450 "data_offset": 0, 00:21:04.450 "data_size": 7936 00:21:04.450 }, 00:21:04.450 { 00:21:04.450 "name": "BaseBdev2", 00:21:04.450 
"uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:04.450 "is_configured": true, 00:21:04.450 "data_offset": 256, 00:21:04.450 "data_size": 7936 00:21:04.450 } 00:21:04.450 ] 00:21:04.450 }' 00:21:04.450 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:04.450 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.709 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 spare 00:21:04.709 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.709 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:04.709 [2024-11-27 14:20:41.958980] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.709 [2024-11-27 14:20:41.959238] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:04.709 [2024-11-27 14:20:41.959264] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:04.709 [2024-11-27 14:20:41.959347] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:04.709 [2024-11-27 14:20:41.975137] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006150 00:21:04.709 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.709 14:20:41 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@757 -- # sleep 1 00:21:04.709 [2024-11-27 14:20:41.977739] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:06.084 14:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@758 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:06.084 14:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:06.084 14:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:06.084 14:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:06.084 14:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:06.084 14:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.084 14:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.084 14:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.085 14:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.085 14:20:42 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 
00:21:06.085 "name": "raid_bdev1", 00:21:06.085 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:06.085 "strip_size_kb": 0, 00:21:06.085 "state": "online", 00:21:06.085 "raid_level": "raid1", 00:21:06.085 "superblock": true, 00:21:06.085 "num_base_bdevs": 2, 00:21:06.085 "num_base_bdevs_discovered": 2, 00:21:06.085 "num_base_bdevs_operational": 2, 00:21:06.085 "process": { 00:21:06.085 "type": "rebuild", 00:21:06.085 "target": "spare", 00:21:06.085 "progress": { 00:21:06.085 "blocks": 2560, 00:21:06.085 "percent": 32 00:21:06.085 } 00:21:06.085 }, 00:21:06.085 "base_bdevs_list": [ 00:21:06.085 { 00:21:06.085 "name": "spare", 00:21:06.085 "uuid": "440a2708-f715-5ed2-8f15-126515cc6b77", 00:21:06.085 "is_configured": true, 00:21:06.085 "data_offset": 256, 00:21:06.085 "data_size": 7936 00:21:06.085 }, 00:21:06.085 { 00:21:06.085 "name": "BaseBdev2", 00:21:06.085 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:06.085 "is_configured": true, 00:21:06.085 "data_offset": 256, 00:21:06.085 "data_size": 7936 00:21:06.085 } 00:21:06.085 ] 00:21:06.085 }' 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # rpc_cmd bdev_passthru_delete spare 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.085 [2024-11-27 14:20:43.146974] 
bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.085 [2024-11-27 14:20:43.186897] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:06.085 [2024-11-27 14:20:43.187259] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:06.085 [2024-11-27 14:20:43.187522] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:06.085 [2024-11-27 14:20:43.187582] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:06.085 14:20:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:06.085 "name": "raid_bdev1", 00:21:06.085 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:06.085 "strip_size_kb": 0, 00:21:06.085 "state": "online", 00:21:06.085 "raid_level": "raid1", 00:21:06.085 "superblock": true, 00:21:06.085 "num_base_bdevs": 2, 00:21:06.085 "num_base_bdevs_discovered": 1, 00:21:06.085 "num_base_bdevs_operational": 1, 00:21:06.085 "base_bdevs_list": [ 00:21:06.085 { 00:21:06.085 "name": null, 00:21:06.085 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:06.085 "is_configured": false, 00:21:06.085 "data_offset": 0, 00:21:06.085 "data_size": 7936 00:21:06.085 }, 00:21:06.085 { 00:21:06.085 "name": "BaseBdev2", 00:21:06.085 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:06.085 "is_configured": true, 00:21:06.085 "data_offset": 256, 00:21:06.085 "data_size": 7936 00:21:06.085 } 00:21:06.085 ] 00:21:06.085 }' 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:06.085 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.652 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # rpc_cmd bdev_passthru_create -b spare_delay -p spare 00:21:06.652 14:20:43 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.652 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:06.652 [2024-11-27 14:20:43.760385] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:21:06.652 [2024-11-27 14:20:43.760476] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:06.652 [2024-11-27 14:20:43.760514] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000a880 00:21:06.652 [2024-11-27 14:20:43.760532] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:06.652 [2024-11-27 14:20:43.760797] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:06.652 [2024-11-27 14:20:43.760826] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:21:06.652 [2024-11-27 14:20:43.760902] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:21:06.652 [2024-11-27 14:20:43.760929] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:21:06.652 [2024-11-27 14:20:43.760946] bdev_raid.c:3758:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:21:06.652 [2024-11-27 14:20:43.760979] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:21:06.652 spare 00:21:06.652 [2024-11-27 14:20:43.776795] bdev_raid.c: 265:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x60d000006220 00:21:06.652 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.652 14:20:43 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@764 -- # sleep 1 00:21:06.652 [2024-11-27 14:20:43.779301] bdev_raid.c:2935:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@765 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=rebuild 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=spare 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # 
raid_bdev_info='{ 00:21:07.591 "name": "raid_bdev1", 00:21:07.591 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:07.591 "strip_size_kb": 0, 00:21:07.591 "state": "online", 00:21:07.591 "raid_level": "raid1", 00:21:07.591 "superblock": true, 00:21:07.591 "num_base_bdevs": 2, 00:21:07.591 "num_base_bdevs_discovered": 2, 00:21:07.591 "num_base_bdevs_operational": 2, 00:21:07.591 "process": { 00:21:07.591 "type": "rebuild", 00:21:07.591 "target": "spare", 00:21:07.591 "progress": { 00:21:07.591 "blocks": 2560, 00:21:07.591 "percent": 32 00:21:07.591 } 00:21:07.591 }, 00:21:07.591 "base_bdevs_list": [ 00:21:07.591 { 00:21:07.591 "name": "spare", 00:21:07.591 "uuid": "440a2708-f715-5ed2-8f15-126515cc6b77", 00:21:07.591 "is_configured": true, 00:21:07.591 "data_offset": 256, 00:21:07.591 "data_size": 7936 00:21:07.591 }, 00:21:07.591 { 00:21:07.591 "name": "BaseBdev2", 00:21:07.591 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:07.591 "is_configured": true, 00:21:07.591 "data_offset": 256, 00:21:07.591 "data_size": 7936 00:21:07.591 } 00:21:07.591 ] 00:21:07.591 }' 00:21:07.591 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:07.850 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:21:07.850 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:07.850 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ spare == \s\p\a\r\e ]] 00:21:07.850 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # rpc_cmd bdev_passthru_delete spare 00:21:07.850 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.850 14:20:44 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.850 [2024-11-27 
14:20:44.952933] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.850 [2024-11-27 14:20:44.988797] bdev_raid.c:2571:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: No such device 00:21:07.850 [2024-11-27 14:20:44.989056] bdev_raid.c: 345:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:21:07.850 [2024-11-27 14:20:44.989089] bdev_raid.c:2175:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:21:07.851 [2024-11-27 14:20:44.989102] bdev_raid.c:2509:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: No such device 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@769 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:07.851 14:20:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:07.851 "name": "raid_bdev1", 00:21:07.851 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:07.851 "strip_size_kb": 0, 00:21:07.851 "state": "online", 00:21:07.851 "raid_level": "raid1", 00:21:07.851 "superblock": true, 00:21:07.851 "num_base_bdevs": 2, 00:21:07.851 "num_base_bdevs_discovered": 1, 00:21:07.851 "num_base_bdevs_operational": 1, 00:21:07.851 "base_bdevs_list": [ 00:21:07.851 { 00:21:07.851 "name": null, 00:21:07.851 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:07.851 "is_configured": false, 00:21:07.851 "data_offset": 0, 00:21:07.851 "data_size": 7936 00:21:07.851 }, 00:21:07.851 { 00:21:07.851 "name": "BaseBdev2", 00:21:07.851 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:07.851 "is_configured": true, 00:21:07.851 "data_offset": 256, 00:21:07.851 "data_size": 7936 00:21:07.851 } 00:21:07.851 ] 00:21:07.851 }' 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:07.851 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@770 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:08.418 14:20:45 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:08.418 "name": "raid_bdev1", 00:21:08.418 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:08.418 "strip_size_kb": 0, 00:21:08.418 "state": "online", 00:21:08.418 "raid_level": "raid1", 00:21:08.418 "superblock": true, 00:21:08.418 "num_base_bdevs": 2, 00:21:08.418 "num_base_bdevs_discovered": 1, 00:21:08.418 "num_base_bdevs_operational": 1, 00:21:08.418 "base_bdevs_list": [ 00:21:08.418 { 00:21:08.418 "name": null, 00:21:08.418 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:08.418 "is_configured": false, 00:21:08.418 "data_offset": 0, 00:21:08.418 "data_size": 7936 00:21:08.418 }, 00:21:08.418 { 00:21:08.418 "name": "BaseBdev2", 00:21:08.418 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:08.418 "is_configured": true, 00:21:08.418 "data_offset": 256, 
00:21:08.418 "data_size": 7936 00:21:08.418 } 00:21:08.418 ] 00:21:08.418 }' 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:08.418 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:08.676 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:08.676 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # rpc_cmd bdev_passthru_delete BaseBdev1 00:21:08.676 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.676 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.676 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.676 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # rpc_cmd bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:21:08.676 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.676 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:08.676 [2024-11-27 14:20:45.746004] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:21:08.676 [2024-11-27 14:20:45.746278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:21:08.676 [2024-11-27 14:20:45.746320] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x61600000ae80 00:21:08.676 [2024-11-27 14:20:45.746335] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:21:08.676 [2024-11-27 14:20:45.746553] 
vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:21:08.676 [2024-11-27 14:20:45.746576] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:21:08.676 [2024-11-27 14:20:45.746645] bdev_raid.c:3907:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:21:08.676 [2024-11-27 14:20:45.746664] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:08.676 [2024-11-27 14:20:45.746678] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:08.676 [2024-11-27 14:20:45.746690] bdev_raid.c:3894:raid_bdev_examine_done: *ERROR*: Failed to examine bdev BaseBdev1: Invalid argument 00:21:08.676 BaseBdev1 00:21:08.676 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.676 14:20:45 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # sleep 1 00:21:09.612 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:09.613 14:20:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:09.613 "name": "raid_bdev1", 00:21:09.613 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:09.613 "strip_size_kb": 0, 00:21:09.613 "state": "online", 00:21:09.613 "raid_level": "raid1", 00:21:09.613 "superblock": true, 00:21:09.613 "num_base_bdevs": 2, 00:21:09.613 "num_base_bdevs_discovered": 1, 00:21:09.613 "num_base_bdevs_operational": 1, 00:21:09.613 "base_bdevs_list": [ 00:21:09.613 { 00:21:09.613 "name": null, 00:21:09.613 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:09.613 "is_configured": false, 00:21:09.613 "data_offset": 0, 00:21:09.613 "data_size": 7936 00:21:09.613 }, 00:21:09.613 { 00:21:09.613 "name": "BaseBdev2", 00:21:09.613 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:09.613 "is_configured": true, 00:21:09.613 "data_offset": 256, 00:21:09.613 "data_size": 7936 00:21:09.613 } 00:21:09.613 ] 00:21:09.613 }' 00:21:09.613 14:20:46 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:09.613 14:20:46 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.181 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:10.181 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:10.181 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:10.181 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:10.181 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:10.181 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:10.181 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:10.181 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.181 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.181 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.181 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:10.181 "name": "raid_bdev1", 00:21:10.181 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:10.181 "strip_size_kb": 0, 00:21:10.181 "state": "online", 00:21:10.181 "raid_level": "raid1", 00:21:10.181 "superblock": true, 00:21:10.181 "num_base_bdevs": 2, 00:21:10.181 "num_base_bdevs_discovered": 1, 00:21:10.181 "num_base_bdevs_operational": 1, 00:21:10.181 "base_bdevs_list": [ 00:21:10.181 { 00:21:10.181 "name": 
null, 00:21:10.181 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:10.181 "is_configured": false, 00:21:10.181 "data_offset": 0, 00:21:10.181 "data_size": 7936 00:21:10.181 }, 00:21:10.181 { 00:21:10.181 "name": "BaseBdev2", 00:21:10.181 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:10.182 "is_configured": true, 00:21:10.182 "data_offset": 256, 00:21:10.182 "data_size": 7936 00:21:10.182 } 00:21:10.182 ] 00:21:10.182 }' 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # NOT rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@652 -- # local es=0 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
common/autotest_common.sh@655 -- # rpc_cmd bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:10.182 [2024-11-27 14:20:47.430746] bdev_raid.c:3326:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:21:10.182 [2024-11-27 14:20:47.431123] bdev_raid.c:3700:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:21:10.182 [2024-11-27 14:20:47.431286] bdev_raid.c:3719:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:21:10.182 request: 00:21:10.182 { 00:21:10.182 "base_bdev": "BaseBdev1", 00:21:10.182 "raid_bdev": "raid_bdev1", 00:21:10.182 "method": "bdev_raid_add_base_bdev", 00:21:10.182 "req_id": 1 00:21:10.182 } 00:21:10.182 Got JSON-RPC error response 00:21:10.182 response: 00:21:10.182 { 00:21:10.182 "code": -22, 00:21:10.182 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:21:10.182 } 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@655 -- # es=1 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:10.182 14:20:47 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # sleep 1 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@780 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 1 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@103 -- # local raid_bdev_name=raid_bdev1 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@104 -- # local expected_state=online 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@105 -- # local raid_level=raid1 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@106 -- # local strip_size=0 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@107 -- # local num_base_bdevs_operational=1 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@108 -- # local raid_bdev_info 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@109 -- # local num_base_bdevs 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@110 -- # local num_base_bdevs_discovered 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@111 -- # local tmp 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.595 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@113 -- # raid_bdev_info='{ 00:21:11.595 "name": "raid_bdev1", 00:21:11.595 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:11.595 "strip_size_kb": 0, 
00:21:11.595 "state": "online", 00:21:11.595 "raid_level": "raid1", 00:21:11.595 "superblock": true, 00:21:11.595 "num_base_bdevs": 2, 00:21:11.595 "num_base_bdevs_discovered": 1, 00:21:11.595 "num_base_bdevs_operational": 1, 00:21:11.595 "base_bdevs_list": [ 00:21:11.595 { 00:21:11.595 "name": null, 00:21:11.595 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.595 "is_configured": false, 00:21:11.595 "data_offset": 0, 00:21:11.595 "data_size": 7936 00:21:11.595 }, 00:21:11.596 { 00:21:11.596 "name": "BaseBdev2", 00:21:11.596 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:11.596 "is_configured": true, 00:21:11.596 "data_offset": 256, 00:21:11.596 "data_size": 7936 00:21:11.596 } 00:21:11.596 ] 00:21:11.596 }' 00:21:11.596 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@115 -- # xtrace_disable 00:21:11.596 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.855 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@781 -- # verify_raid_bdev_process raid_bdev1 none none 00:21:11.855 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@169 -- # local raid_bdev_name=raid_bdev1 00:21:11.855 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@170 -- # local process_type=none 00:21:11.855 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@171 -- # local target=none 00:21:11.855 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@172 -- # local raid_bdev_info 00:21:11.855 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # rpc_cmd bdev_raid_get_bdevs all 00:21:11.855 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:21:11.855 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.855 
14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:11.855 14:20:48 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.855 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@174 -- # raid_bdev_info='{ 00:21:11.855 "name": "raid_bdev1", 00:21:11.855 "uuid": "dbc9db91-87c1-426e-b257-60f4269f2ecc", 00:21:11.855 "strip_size_kb": 0, 00:21:11.855 "state": "online", 00:21:11.855 "raid_level": "raid1", 00:21:11.855 "superblock": true, 00:21:11.855 "num_base_bdevs": 2, 00:21:11.855 "num_base_bdevs_discovered": 1, 00:21:11.855 "num_base_bdevs_operational": 1, 00:21:11.855 "base_bdevs_list": [ 00:21:11.855 { 00:21:11.855 "name": null, 00:21:11.855 "uuid": "00000000-0000-0000-0000-000000000000", 00:21:11.855 "is_configured": false, 00:21:11.855 "data_offset": 0, 00:21:11.855 "data_size": 7936 00:21:11.855 }, 00:21:11.855 { 00:21:11.855 "name": "BaseBdev2", 00:21:11.855 "uuid": "33e28191-1dcf-5b95-a07d-19cc208163f0", 00:21:11.855 "is_configured": true, 00:21:11.855 "data_offset": 256, 00:21:11.855 "data_size": 7936 00:21:11.855 } 00:21:11.855 ] 00:21:11.855 }' 00:21:11.855 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # jq -r '.process.type // "none"' 00:21:11.855 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@176 -- # [[ none == \n\o\n\e ]] 00:21:11.855 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # jq -r '.process.target // "none"' 00:21:11.855 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@177 -- # [[ none == \n\o\n\e ]] 00:21:11.855 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # killprocess 89406 00:21:11.855 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@954 -- # '[' -z 89406 ']' 00:21:11.855 14:20:49 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # kill -0 89406 00:21:11.855 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # uname 00:21:11.855 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:11.855 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 89406 00:21:12.114 killing process with pid 89406 00:21:12.115 Received shutdown signal, test time was about 60.000000 seconds 00:21:12.115 00:21:12.115 Latency(us) 00:21:12.115 [2024-11-27T14:20:49.393Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:12.115 [2024-11-27T14:20:49.393Z] =================================================================================================================== 00:21:12.115 [2024-11-27T14:20:49.393Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:12.115 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:12.115 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:12.115 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # echo 'killing process with pid 89406' 00:21:12.115 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@973 -- # kill 89406 00:21:12.115 14:20:49 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@978 -- # wait 89406 00:21:12.115 [2024-11-27 14:20:49.143255] bdev_raid.c:1387:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:21:12.115 [2024-11-27 14:20:49.143413] bdev_raid.c: 492:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:21:12.115 [2024-11-27 14:20:49.143473] bdev_raid.c: 469:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:21:12.115 [2024-11-27 14:20:49.143490] bdev_raid.c: 380:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x617000007b00 name raid_bdev1, state offline 00:21:12.374 [2024-11-27 14:20:49.404462] bdev_raid.c:1413:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:21:13.310 ************************************ 00:21:13.310 END TEST raid_rebuild_test_sb_md_interleaved 00:21:13.310 ************************************ 00:21:13.310 14:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@786 -- # return 0 00:21:13.310 00:21:13.310 real 0m18.684s 00:21:13.310 user 0m25.525s 00:21:13.310 sys 0m1.553s 00:21:13.310 14:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.310 14:20:50 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:21:13.310 14:20:50 bdev_raid -- bdev/bdev_raid.sh@1015 -- # trap - EXIT 00:21:13.310 14:20:50 bdev_raid -- bdev/bdev_raid.sh@1016 -- # cleanup 00:21:13.310 14:20:50 bdev_raid -- bdev/bdev_raid.sh@56 -- # '[' -n 89406 ']' 00:21:13.310 14:20:50 bdev_raid -- bdev/bdev_raid.sh@56 -- # ps -p 89406 00:21:13.310 14:20:50 bdev_raid -- bdev/bdev_raid.sh@60 -- # rm -rf /raidtest 00:21:13.310 ************************************ 00:21:13.310 END TEST bdev_raid 00:21:13.310 ************************************ 00:21:13.310 00:21:13.310 real 13m6.472s 00:21:13.310 user 18m33.655s 00:21:13.310 sys 1m46.447s 00:21:13.310 14:20:50 bdev_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:13.310 14:20:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:21:13.310 14:20:50 -- spdk/autotest.sh@190 -- # run_test spdkcli_raid /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:13.310 14:20:50 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:13.310 14:20:50 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:13.310 14:20:50 -- common/autotest_common.sh@10 -- # set +x 00:21:13.310 
************************************ 00:21:13.310 START TEST spdkcli_raid 00:21:13.310 ************************************ 00:21:13.310 14:20:50 spdkcli_raid -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:13.310 * Looking for test storage... 00:21:13.570 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:13.570 14:20:50 spdkcli_raid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:13.570 14:20:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:13.570 14:20:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:13.570 14:20:50 spdkcli_raid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@336 -- # IFS=.-: 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@336 -- # read -ra ver1 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@337 -- # IFS=.-: 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@337 -- # read -ra ver2 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@338 -- # local 'op=<' 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@340 -- # ver1_l=2 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@341 -- # ver2_l=1 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@344 -- # case "$op" in 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@345 -- # : 1 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@365 -- # decimal 1 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=1 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 1 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@366 -- # decimal 2 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@353 -- # local d=2 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@355 -- # echo 2 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:13.570 14:20:50 spdkcli_raid -- scripts/common.sh@368 -- # return 0 00:21:13.570 14:20:50 spdkcli_raid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:13.570 14:20:50 spdkcli_raid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:13.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.570 --rc genhtml_branch_coverage=1 00:21:13.570 --rc genhtml_function_coverage=1 00:21:13.570 --rc genhtml_legend=1 00:21:13.570 --rc geninfo_all_blocks=1 00:21:13.570 --rc geninfo_unexecuted_blocks=1 00:21:13.570 00:21:13.570 ' 00:21:13.570 14:20:50 spdkcli_raid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:13.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.570 --rc genhtml_branch_coverage=1 00:21:13.570 --rc genhtml_function_coverage=1 00:21:13.570 --rc genhtml_legend=1 00:21:13.570 --rc geninfo_all_blocks=1 00:21:13.570 --rc geninfo_unexecuted_blocks=1 00:21:13.570 00:21:13.570 ' 00:21:13.570 
14:20:50 spdkcli_raid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:13.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.570 --rc genhtml_branch_coverage=1 00:21:13.570 --rc genhtml_function_coverage=1 00:21:13.570 --rc genhtml_legend=1 00:21:13.570 --rc geninfo_all_blocks=1 00:21:13.570 --rc geninfo_unexecuted_blocks=1 00:21:13.570 00:21:13.570 ' 00:21:13.570 14:20:50 spdkcli_raid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:13.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:13.570 --rc genhtml_branch_coverage=1 00:21:13.570 --rc genhtml_function_coverage=1 00:21:13.570 --rc genhtml_legend=1 00:21:13.570 --rc geninfo_all_blocks=1 00:21:13.570 --rc geninfo_unexecuted_blocks=1 00:21:13.570 00:21:13.570 ' 00:21:13.570 14:20:50 spdkcli_raid -- spdkcli/raid.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:13.570 14:20:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:13.570 14:20:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:13.570 14:20:50 spdkcli_raid -- spdkcli/raid.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/iscsi_tgt/common.sh 00:21:13.570 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@9 -- # ISCSI_BRIDGE=iscsi_br 00:21:13.570 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@10 -- # INITIATOR_INTERFACE=spdk_init_int 00:21:13.570 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@11 -- # INITIATOR_BRIDGE=init_br 00:21:13.570 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@12 -- # TARGET_NAMESPACE=spdk_iscsi_ns 00:21:13.570 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@13 -- # TARGET_NS_CMD=(ip netns exec "$TARGET_NAMESPACE") 00:21:13.570 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@14 -- # TARGET_INTERFACE=spdk_tgt_int 00:21:13.570 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@15 -- # TARGET_INTERFACE2=spdk_tgt_int2 
00:21:13.570 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@16 -- # TARGET_BRIDGE=tgt_br 00:21:13.570 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@17 -- # TARGET_BRIDGE2=tgt_br2 00:21:13.570 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@20 -- # TARGET_IP=10.0.0.1 00:21:13.570 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@21 -- # TARGET_IP2=10.0.0.3 00:21:13.571 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@22 -- # INITIATOR_IP=10.0.0.2 00:21:13.571 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@23 -- # ISCSI_PORT=3260 00:21:13.571 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@24 -- # NETMASK=10.0.0.2/32 00:21:13.571 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@25 -- # INITIATOR_TAG=2 00:21:13.571 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@26 -- # INITIATOR_NAME=ANY 00:21:13.571 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@27 -- # PORTAL_TAG=1 00:21:13.571 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@28 -- # ISCSI_APP=("${TARGET_NS_CMD[@]}" "${ISCSI_APP[@]}") 00:21:13.571 14:20:50 spdkcli_raid -- iscsi_tgt/common.sh@29 -- # ISCSI_TEST_CORE_MASK=0xF 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/raid.sh@12 -- # MATCH_FILE=spdkcli_raid.test 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/raid.sh@13 -- # SPDKCLI_BRANCH=/bdevs 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # dirname /home/vagrant/spdk_repo/spdk/test/spdkcli/raid.sh 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/raid.sh@14 -- # testdir=/home/vagrant/spdk_repo/spdk/test/spdkcli 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/raid.sh@15 -- # . 
/home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/raid.sh@17 -- # trap cleanup EXIT 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/raid.sh@19 -- # timing_enter run_spdk_tgt 00:21:13.571 14:20:50 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:13.571 14:20:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:13.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/raid.sh@20 -- # run_spdk_tgt 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/common.sh@27 -- # spdk_tgt_pid=90092 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/common.sh@28 -- # waitforlisten 90092 00:21:13.571 14:20:50 spdkcli_raid -- common/autotest_common.sh@835 -- # '[' -z 90092 ']' 00:21:13.571 14:20:50 spdkcli_raid -- spdkcli/common.sh@26 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:21:13.571 14:20:50 spdkcli_raid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:13.571 14:20:50 spdkcli_raid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.571 14:20:50 spdkcli_raid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:13.571 14:20:50 spdkcli_raid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.571 14:20:50 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:13.830 [2024-11-27 14:20:50.861013] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:21:13.830 [2024-11-27 14:20:50.861219] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90092 ] 00:21:13.830 [2024-11-27 14:20:51.059369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:14.089 [2024-11-27 14:20:51.222102] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:14.089 [2024-11-27 14:20:51.222131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:15.025 14:20:52 spdkcli_raid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.025 14:20:52 spdkcli_raid -- common/autotest_common.sh@868 -- # return 0 00:21:15.025 14:20:52 spdkcli_raid -- spdkcli/raid.sh@21 -- # timing_exit run_spdk_tgt 00:21:15.025 14:20:52 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:15.025 14:20:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:15.025 14:20:52 spdkcli_raid -- spdkcli/raid.sh@23 -- # timing_enter spdkcli_create_malloc 00:21:15.025 14:20:52 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:15.025 14:20:52 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:15.025 14:20:52 spdkcli_raid -- spdkcli/raid.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 8 512 Malloc1'\'' '\''Malloc1'\'' True 00:21:15.025 '\''/bdevs/malloc create 8 512 Malloc2'\'' '\''Malloc2'\'' True 00:21:15.025 ' 00:21:16.929 Executing command: ['/bdevs/malloc create 8 512 Malloc1', 'Malloc1', True] 00:21:16.929 Executing command: ['/bdevs/malloc create 8 512 Malloc2', 'Malloc2', True] 00:21:16.929 14:20:53 spdkcli_raid -- spdkcli/raid.sh@27 -- # timing_exit spdkcli_create_malloc 00:21:16.929 14:20:53 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:16.929 14:20:53 spdkcli_raid -- 
common/autotest_common.sh@10 -- # set +x 00:21:16.929 14:20:53 spdkcli_raid -- spdkcli/raid.sh@29 -- # timing_enter spdkcli_create_raid 00:21:16.929 14:20:53 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:16.929 14:20:53 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:16.929 14:20:53 spdkcli_raid -- spdkcli/raid.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4'\'' '\''testraid'\'' True 00:21:16.929 ' 00:21:17.867 Executing command: ['/bdevs/raid_volume create testraid 0 "Malloc1 Malloc2" 4', 'testraid', True] 00:21:18.127 14:20:55 spdkcli_raid -- spdkcli/raid.sh@32 -- # timing_exit spdkcli_create_raid 00:21:18.127 14:20:55 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.127 14:20:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:18.127 14:20:55 spdkcli_raid -- spdkcli/raid.sh@34 -- # timing_enter spdkcli_check_match 00:21:18.127 14:20:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.127 14:20:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:18.127 14:20:55 spdkcli_raid -- spdkcli/raid.sh@35 -- # check_match 00:21:18.127 14:20:55 spdkcli_raid -- spdkcli/common.sh@44 -- # /home/vagrant/spdk_repo/spdk/scripts/spdkcli.py ll /bdevs 00:21:18.694 14:20:55 spdkcli_raid -- spdkcli/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/test/app/match/match /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test.match 00:21:18.694 14:20:55 spdkcli_raid -- spdkcli/common.sh@46 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_raid.test 00:21:18.694 14:20:55 spdkcli_raid -- spdkcli/raid.sh@36 -- # timing_exit spdkcli_check_match 00:21:18.694 14:20:55 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:18.694 14:20:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:18.694 14:20:55 spdkcli_raid -- 
spdkcli/raid.sh@38 -- # timing_enter spdkcli_delete_raid 00:21:18.694 14:20:55 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:18.694 14:20:55 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:18.694 14:20:55 spdkcli_raid -- spdkcli/raid.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/raid_volume delete testraid'\'' '\'''\'' True 00:21:18.694 ' 00:21:20.078 Executing command: ['/bdevs/raid_volume delete testraid', '', True] 00:21:20.078 14:20:57 spdkcli_raid -- spdkcli/raid.sh@41 -- # timing_exit spdkcli_delete_raid 00:21:20.078 14:20:57 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:20.078 14:20:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:20.078 14:20:57 spdkcli_raid -- spdkcli/raid.sh@43 -- # timing_enter spdkcli_delete_malloc 00:21:20.078 14:20:57 spdkcli_raid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:20.078 14:20:57 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:20.078 14:20:57 spdkcli_raid -- spdkcli/raid.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc delete Malloc1'\'' '\'''\'' True 00:21:20.078 '\''/bdevs/malloc delete Malloc2'\'' '\'''\'' True 00:21:20.078 ' 00:21:21.452 Executing command: ['/bdevs/malloc delete Malloc1', '', True] 00:21:21.452 Executing command: ['/bdevs/malloc delete Malloc2', '', True] 00:21:21.452 14:20:58 spdkcli_raid -- spdkcli/raid.sh@47 -- # timing_exit spdkcli_delete_malloc 00:21:21.452 14:20:58 spdkcli_raid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:21.452 14:20:58 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:21.452 14:20:58 spdkcli_raid -- spdkcli/raid.sh@49 -- # killprocess 90092 00:21:21.452 14:20:58 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90092 ']' 00:21:21.452 14:20:58 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90092 00:21:21.452 14:20:58 spdkcli_raid -- 
common/autotest_common.sh@959 -- # uname 00:21:21.711 14:20:58 spdkcli_raid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:21.711 14:20:58 spdkcli_raid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90092 00:21:21.711 killing process with pid 90092 00:21:21.711 14:20:58 spdkcli_raid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:21.711 14:20:58 spdkcli_raid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:21.711 14:20:58 spdkcli_raid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90092' 00:21:21.711 14:20:58 spdkcli_raid -- common/autotest_common.sh@973 -- # kill 90092 00:21:21.711 14:20:58 spdkcli_raid -- common/autotest_common.sh@978 -- # wait 90092 00:21:24.257 14:21:00 spdkcli_raid -- spdkcli/raid.sh@1 -- # cleanup 00:21:24.257 14:21:00 spdkcli_raid -- spdkcli/common.sh@10 -- # '[' -n 90092 ']' 00:21:24.257 14:21:00 spdkcli_raid -- spdkcli/common.sh@11 -- # killprocess 90092 00:21:24.257 14:21:00 spdkcli_raid -- common/autotest_common.sh@954 -- # '[' -z 90092 ']' 00:21:24.257 14:21:00 spdkcli_raid -- common/autotest_common.sh@958 -- # kill -0 90092 00:21:24.257 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (90092) - No such process 00:21:24.257 Process with pid 90092 is not found 00:21:24.257 14:21:00 spdkcli_raid -- common/autotest_common.sh@981 -- # echo 'Process with pid 90092 is not found' 00:21:24.257 14:21:00 spdkcli_raid -- spdkcli/common.sh@13 -- # '[' -n '' ']' 00:21:24.257 14:21:00 spdkcli_raid -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:21:24.257 14:21:00 spdkcli_raid -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:21:24.257 14:21:00 spdkcli_raid -- spdkcli/common.sh@22 -- # rm -f /home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_raid.test /home/vagrant/spdk_repo/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:21:24.257 ************************************ 00:21:24.257 END TEST spdkcli_raid 
00:21:24.257 ************************************ 00:21:24.257 00:21:24.257 real 0m10.465s 00:21:24.257 user 0m21.763s 00:21:24.257 sys 0m1.185s 00:21:24.257 14:21:00 spdkcli_raid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:24.257 14:21:00 spdkcli_raid -- common/autotest_common.sh@10 -- # set +x 00:21:24.257 14:21:01 -- spdk/autotest.sh@191 -- # run_test blockdev_raid5f /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:24.257 14:21:01 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:24.258 14:21:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:24.258 14:21:01 -- common/autotest_common.sh@10 -- # set +x 00:21:24.258 ************************************ 00:21:24.258 START TEST blockdev_raid5f 00:21:24.258 ************************************ 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh raid5f 00:21:24.258 * Looking for test storage... 00:21:24.258 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lcov --version 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@336 -- # IFS=.-: 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@336 -- # read -ra ver1 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@337 -- # IFS=.-: 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@337 -- # read -ra 
ver2 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@338 -- # local 'op=<' 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@340 -- # ver1_l=2 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@341 -- # ver2_l=1 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@344 -- # case "$op" in 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@345 -- # : 1 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@365 -- # decimal 1 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@353 -- # local d=1 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@355 -- # echo 1 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@365 -- # ver1[v]=1 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@366 -- # decimal 2 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@353 -- # local d=2 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@355 -- # echo 2 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@366 -- # ver2[v]=2 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:24.258 14:21:01 blockdev_raid5f -- scripts/common.sh@368 -- # return 0 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:24.258 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.258 --rc genhtml_branch_coverage=1 00:21:24.258 --rc genhtml_function_coverage=1 00:21:24.258 --rc genhtml_legend=1 00:21:24.258 --rc geninfo_all_blocks=1 00:21:24.258 --rc geninfo_unexecuted_blocks=1 00:21:24.258 00:21:24.258 ' 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:24.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.258 --rc genhtml_branch_coverage=1 00:21:24.258 --rc genhtml_function_coverage=1 00:21:24.258 --rc genhtml_legend=1 00:21:24.258 --rc geninfo_all_blocks=1 00:21:24.258 --rc geninfo_unexecuted_blocks=1 00:21:24.258 00:21:24.258 ' 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:24.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.258 --rc genhtml_branch_coverage=1 00:21:24.258 --rc genhtml_function_coverage=1 00:21:24.258 --rc genhtml_legend=1 00:21:24.258 --rc geninfo_all_blocks=1 00:21:24.258 --rc geninfo_unexecuted_blocks=1 00:21:24.258 00:21:24.258 ' 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:24.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:24.258 --rc genhtml_branch_coverage=1 00:21:24.258 --rc genhtml_function_coverage=1 00:21:24.258 --rc genhtml_legend=1 00:21:24.258 --rc geninfo_all_blocks=1 00:21:24.258 --rc geninfo_unexecuted_blocks=1 00:21:24.258 00:21:24.258 ' 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/nbd_common.sh@6 -- # set -e 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@14 -- # 
nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@20 -- # : 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@711 -- # uname -s 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@719 -- # test_type=raid5f 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@720 -- # crypto_device= 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@721 -- # dek= 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@722 -- # env_ctx= 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == bdev ]] 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@727 -- # [[ raid5f == crypto_* ]] 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=90367 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@46 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:21:24.258 14:21:01 blockdev_raid5f -- bdev/blockdev.sh@49 -- # waitforlisten 90367 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@835 -- # '[' -z 90367 ']' 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.258 14:21:01 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:24.258 [2024-11-27 14:21:01.364610] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:21:24.258 [2024-11-27 14:21:01.364838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90367 ] 00:21:24.517 [2024-11-27 14:21:01.562584] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.517 [2024-11-27 14:21:01.721265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.453 14:21:02 blockdev_raid5f -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:25.453 14:21:02 blockdev_raid5f -- common/autotest_common.sh@868 -- # return 0 00:21:25.453 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:21:25.453 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@763 -- # setup_raid5f_conf 00:21:25.453 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@279 -- # rpc_cmd 00:21:25.453 14:21:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.453 14:21:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:25.453 Malloc0 00:21:25.453 Malloc1 00:21:25.453 Malloc2 00:21:25.453 14:21:02 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.453 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:21:25.453 14:21:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.453 14:21:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:25.453 14:21:02 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.453 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@777 -- # cat 00:21:25.453 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:21:25.453 14:21:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.453 14:21:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:25.713 14:21:02 
blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "893c896a-01ef-4d57-9720-2d103551d296"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "893c896a-01ef-4d57-9720-2d103551d296",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' 
"flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "893c896a-01ef-4d57-9720-2d103551d296",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2d3dc648-dd4d-4dca-a6af-c23bc8bd17e5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "57f5afd4-68e0-44e7-8ad4-8f504c09a1fb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0bd718e0-b9cb-40b6-bd87-5cd1e1b1de49",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@786 -- # jq -r .name 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@789 -- # hello_world_bdev=raid5f 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:21:25.713 14:21:02 blockdev_raid5f -- bdev/blockdev.sh@791 -- # killprocess 90367 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@954 -- # '[' -z 90367 ']' 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@958 -- # kill -0 90367 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@959 -- # uname 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:25.713 
14:21:02 blockdev_raid5f -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90367 00:21:25.713 killing process with pid 90367 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90367' 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@973 -- # kill 90367 00:21:25.713 14:21:02 blockdev_raid5f -- common/autotest_common.sh@978 -- # wait 90367 00:21:28.247 14:21:05 blockdev_raid5f -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:28.247 14:21:05 blockdev_raid5f -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:28.247 14:21:05 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:21:28.247 14:21:05 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:28.247 14:21:05 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:28.247 ************************************ 00:21:28.247 START TEST bdev_hello_world 00:21:28.247 ************************************ 00:21:28.247 14:21:05 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b raid5f '' 00:21:28.247 [2024-11-27 14:21:05.519615] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:21:28.247 [2024-11-27 14:21:05.519763] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90434 ] 00:21:28.506 [2024-11-27 14:21:05.690361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.766 [2024-11-27 14:21:05.825443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.334 [2024-11-27 14:21:06.362153] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:21:29.334 [2024-11-27 14:21:06.362238] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev raid5f 00:21:29.334 [2024-11-27 14:21:06.362267] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:21:29.334 [2024-11-27 14:21:06.362891] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:21:29.334 [2024-11-27 14:21:06.363235] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:21:29.334 [2024-11-27 14:21:06.363280] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:21:29.334 [2024-11-27 14:21:06.363359] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
00:21:29.334 00:21:29.334 [2024-11-27 14:21:06.363389] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:21:30.712 00:21:30.712 real 0m2.229s 00:21:30.712 user 0m1.785s 00:21:30.712 sys 0m0.317s 00:21:30.712 14:21:07 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:30.712 14:21:07 blockdev_raid5f.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:21:30.712 ************************************ 00:21:30.712 END TEST bdev_hello_world 00:21:30.712 ************************************ 00:21:30.712 14:21:07 blockdev_raid5f -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:21:30.712 14:21:07 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:30.712 14:21:07 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:30.712 14:21:07 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:30.712 ************************************ 00:21:30.712 START TEST bdev_bounds 00:21:30.712 ************************************ 00:21:30.712 Process bdevio pid: 90481 00:21:30.712 14:21:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:21:30.712 14:21:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=90481 00:21:30.712 14:21:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:30.712 14:21:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:21:30.712 14:21:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 90481' 00:21:30.712 14:21:07 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 90481 00:21:30.712 14:21:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 90481 ']' 00:21:30.712 14:21:07 
blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:30.712 14:21:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:30.712 14:21:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:30.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:30.712 14:21:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:30.712 14:21:07 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:30.712 [2024-11-27 14:21:07.819811] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:21:30.712 [2024-11-27 14:21:07.820261] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90481 ] 00:21:30.971 [2024-11-27 14:21:08.001940] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:30.971 [2024-11-27 14:21:08.127753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:30.971 [2024-11-27 14:21:08.127839] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.971 [2024-11-27 14:21:08.127873] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:31.538 14:21:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:31.538 14:21:08 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:21:31.538 14:21:08 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:21:31.797 I/O targets: 00:21:31.797 raid5f: 131072 blocks of 512 bytes (64 MiB) 00:21:31.797 00:21:31.797 
00:21:31.797 CUnit - A unit testing framework for C - Version 2.1-3 00:21:31.797 http://cunit.sourceforge.net/ 00:21:31.797 00:21:31.797 00:21:31.797 Suite: bdevio tests on: raid5f 00:21:31.797 Test: blockdev write read block ...passed 00:21:31.797 Test: blockdev write zeroes read block ...passed 00:21:31.797 Test: blockdev write zeroes read no split ...passed 00:21:31.797 Test: blockdev write zeroes read split ...passed 00:21:32.059 Test: blockdev write zeroes read split partial ...passed 00:21:32.059 Test: blockdev reset ...passed 00:21:32.059 Test: blockdev write read 8 blocks ...passed 00:21:32.059 Test: blockdev write read size > 128k ...passed 00:21:32.059 Test: blockdev write read invalid size ...passed 00:21:32.059 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:21:32.059 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:21:32.059 Test: blockdev write read max offset ...passed 00:21:32.059 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:21:32.059 Test: blockdev writev readv 8 blocks ...passed 00:21:32.059 Test: blockdev writev readv 30 x 1block ...passed 00:21:32.059 Test: blockdev writev readv block ...passed 00:21:32.059 Test: blockdev writev readv size > 128k ...passed 00:21:32.059 Test: blockdev writev readv size > 128k in two iovs ...passed 00:21:32.059 Test: blockdev comparev and writev ...passed 00:21:32.059 Test: blockdev nvme passthru rw ...passed 00:21:32.059 Test: blockdev nvme passthru vendor specific ...passed 00:21:32.059 Test: blockdev nvme admin passthru ...passed 00:21:32.059 Test: blockdev copy ...passed 00:21:32.059 00:21:32.059 Run Summary: Type Total Ran Passed Failed Inactive 00:21:32.059 suites 1 1 n/a 0 0 00:21:32.059 tests 23 23 23 0 0 00:21:32.059 asserts 130 130 130 0 n/a 00:21:32.059 00:21:32.059 Elapsed time = 0.627 seconds 00:21:32.059 0 00:21:32.059 14:21:09 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 90481 00:21:32.059 
14:21:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 90481 ']' 00:21:32.059 14:21:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 90481 00:21:32.059 14:21:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:21:32.059 14:21:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:32.059 14:21:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90481 00:21:32.059 14:21:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:32.059 14:21:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:32.059 14:21:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90481' 00:21:32.059 killing process with pid 90481 00:21:32.059 14:21:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@973 -- # kill 90481 00:21:32.059 14:21:09 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@978 -- # wait 90481 00:21:33.451 14:21:10 blockdev_raid5f.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:21:33.451 00:21:33.451 real 0m2.843s 00:21:33.451 user 0m7.052s 00:21:33.451 sys 0m0.430s 00:21:33.451 14:21:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.451 14:21:10 blockdev_raid5f.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:21:33.451 ************************************ 00:21:33.451 END TEST bdev_bounds 00:21:33.451 ************************************ 00:21:33.451 14:21:10 blockdev_raid5f -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:33.451 14:21:10 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:21:33.451 14:21:10 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.451 
14:21:10 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:33.451 ************************************ 00:21:33.451 START TEST bdev_nbd 00:21:33.451 ************************************ 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json raid5f '' 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('raid5f') 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=1 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=1 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0') 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('raid5f') 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@314 
-- # local bdev_list 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=90541 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 90541 /var/tmp/spdk-nbd.sock 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 90541 ']' 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:21:33.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.451 14:21:10 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:33.451 [2024-11-27 14:21:10.702585] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:21:33.451 [2024-11-27 14:21:10.703009] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.708 [2024-11-27 14:21:10.872093] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:33.966 [2024-11-27 14:21:10.991420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock raid5f 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('raid5f') 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock raid5f 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('raid5f') 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:34.532 14:21:11 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:34.790 1+0 records in 00:21:34.790 1+0 records out 00:21:34.790 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000613574 s, 6.7 MB/s 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 
00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 1 )) 00:21:34.790 14:21:11 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:35.100 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:21:35.100 { 00:21:35.100 "nbd_device": "/dev/nbd0", 00:21:35.100 "bdev_name": "raid5f" 00:21:35.100 } 00:21:35.100 ]' 00:21:35.100 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:21:35.100 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:21:35.100 { 00:21:35.100 "nbd_device": "/dev/nbd0", 00:21:35.100 "bdev_name": "raid5f" 00:21:35.100 } 00:21:35.100 ]' 00:21:35.100 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:21:35.100 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:35.100 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:35.100 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:35.100 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:35.100 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:35.100 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:35.100 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:35.358 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd0 00:21:35.358 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:35.358 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:35.358 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:35.358 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:35.358 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:35.358 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:35.358 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:35.358 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:35.358 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:35.358 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:35.615 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:35.615 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:35.615 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd 
-- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('raid5f') 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0') 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock raid5f /dev/nbd0 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('raid5f') 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0') 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:35.874 14:21:12 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk raid5f /dev/nbd0 00:21:36.132 /dev/nbd0 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:21:36.132 14:21:13 
blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:21:36.132 1+0 records in 00:21:36.132 1+0 records out 00:21:36.132 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000600209 s, 6.8 MB/s 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 1 )) 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- 
bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:36.132 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:21:36.389 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:21:36.389 { 00:21:36.389 "nbd_device": "/dev/nbd0", 00:21:36.389 "bdev_name": "raid5f" 00:21:36.389 } 00:21:36.389 ]' 00:21:36.389 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:21:36.389 { 00:21:36.389 "nbd_device": "/dev/nbd0", 00:21:36.389 "bdev_name": "raid5f" 00:21:36.389 } 00:21:36.389 ]' 00:21:36.389 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:36.389 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name=/dev/nbd0 00:21:36.389 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo /dev/nbd0 00:21:36.389 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:36.390 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=1 00:21:36.390 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 1 00:21:36.390 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=1 00:21:36.390 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 1 -ne 1 ']' 00:21:36.390 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify /dev/nbd0 write 00:21:36.390 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:36.390 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:36.390 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:21:36.390 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local 
tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:36.390 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:21:36.390 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:21:36.648 256+0 records in 00:21:36.648 256+0 records out 00:21:36.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00755968 s, 139 MB/s 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:21:36.648 256+0 records in 00:21:36.648 256+0 records out 00:21:36.648 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0398726 s, 26.3 MB/s 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify /dev/nbd0 verify 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0') 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm 
/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:36.648 14:21:13 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:36.961 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:36.961 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:36.961 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:36.961 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:36.961 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:36.961 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:36.961 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:36.961 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:36.961 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:21:36.961 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:36.961 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 
00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:37.221 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:37.222 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:21:37.222 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:21:37.481 malloc_lvol_verify 00:21:37.481 14:21:14 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:21:37.740 915c9af5-a13b-42fc-8470-7e8790e0894e 00:21:37.740 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@136 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:21:38.307 ae0759ff-bff5-458f-91f9-3da2a47fb01b 00:21:38.307 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:21:38.566 /dev/nbd0 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:21:38.567 mke2fs 1.47.0 (5-Feb-2023) 00:21:38.567 Discarding device blocks: 0/4096 done 00:21:38.567 Creating filesystem with 4096 1k blocks and 1024 inodes 00:21:38.567 00:21:38.567 Allocating group tables: 0/1 done 00:21:38.567 Writing inode tables: 0/1 done 00:21:38.567 Creating journal (1024 blocks): done 00:21:38.567 Writing superblocks and filesystem accounting information: 0/1 done 00:21:38.567 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:21:38.567 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@54 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 90541 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 90541 ']' 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 90541 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 90541 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:38.825 killing process with pid 90541 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 90541' 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@973 -- # kill 90541 00:21:38.825 14:21:15 blockdev_raid5f.bdev_nbd -- 
common/autotest_common.sh@978 -- # wait 90541 00:21:40.202 14:21:17 blockdev_raid5f.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:21:40.202 00:21:40.202 real 0m6.692s 00:21:40.202 user 0m9.712s 00:21:40.202 sys 0m1.389s 00:21:40.202 14:21:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.202 ************************************ 00:21:40.202 14:21:17 blockdev_raid5f.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:21:40.202 END TEST bdev_nbd 00:21:40.202 ************************************ 00:21:40.202 14:21:17 blockdev_raid5f -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:21:40.202 14:21:17 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = nvme ']' 00:21:40.202 14:21:17 blockdev_raid5f -- bdev/blockdev.sh@801 -- # '[' raid5f = gpt ']' 00:21:40.202 14:21:17 blockdev_raid5f -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite '' 00:21:40.202 14:21:17 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:40.202 14:21:17 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.202 14:21:17 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:40.202 ************************************ 00:21:40.202 START TEST bdev_fio 00:21:40.202 ************************************ 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite '' 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:21:40.202 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # echo '' 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@338 -- # sed s/--env-context=// 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@338 -- # env_context= 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']' 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']' 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1318 -- # cat 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']' 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 
00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}" 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_raid5f]' 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=raid5f 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']' 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:40.202 ************************************ 00:21:40.202 START TEST bdev_fio_rw_verify 00:21:40.202 ************************************ 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin 
/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift 00:21:40.202 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib= 00:21:40.461 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:21:40.461 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:21:40.461 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan 00:21:40.461 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:21:40.461 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:21:40.461 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:21:40.461 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- 
common/autotest_common.sh@1351 -- # break 00:21:40.461 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:21:40.461 14:21:17 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:21:40.720 job_raid5f: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:21:40.720 fio-3.35 00:21:40.720 Starting 1 thread 00:21:52.943 00:21:52.943 job_raid5f: (groupid=0, jobs=1): err= 0: pid=90745: Wed Nov 27 14:21:28 2024 00:21:52.943 read: IOPS=8089, BW=31.6MiB/s (33.1MB/s)(316MiB/10001msec) 00:21:52.943 slat (usec): min=24, max=226, avg=31.30, stdev= 6.78 00:21:52.943 clat (usec): min=13, max=900, avg=197.26, stdev=77.39 00:21:52.944 lat (usec): min=43, max=968, avg=228.56, stdev=78.73 00:21:52.944 clat percentiles (usec): 00:21:52.944 | 50.000th=[ 198], 99.000th=[ 359], 99.900th=[ 523], 99.990th=[ 799], 00:21:52.944 | 99.999th=[ 898] 00:21:52.944 write: IOPS=8505, BW=33.2MiB/s (34.8MB/s)(329MiB/9889msec); 0 zone resets 00:21:52.944 slat (usec): min=11, max=140, avg=24.30, stdev= 6.95 00:21:52.944 clat (usec): min=93, max=1367, avg=447.96, stdev=67.33 00:21:52.944 lat (usec): min=118, max=1441, avg=472.25, stdev=69.58 00:21:52.944 clat percentiles (usec): 00:21:52.944 | 50.000th=[ 449], 99.000th=[ 635], 99.900th=[ 938], 99.990th=[ 1270], 00:21:52.944 | 99.999th=[ 1369] 00:21:52.944 bw ( KiB/s): min=31784, max=35096, per=98.84%, avg=33627.37, stdev=880.06, samples=19 00:21:52.944 iops : min= 7946, max= 8774, avg=8406.84, stdev=220.01, samples=19 00:21:52.944 lat (usec) : 20=0.01%, 50=0.01%, 100=5.68%, 
250=29.26%, 500=56.31% 00:21:52.944 lat (usec) : 750=8.62%, 1000=0.09% 00:21:52.944 lat (msec) : 2=0.04% 00:21:52.944 cpu : usr=98.56%, sys=0.51%, ctx=27, majf=0, minf=7110 00:21:52.944 IO depths : 1=7.8%, 2=19.9%, 4=55.2%, 8=17.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:21:52.944 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.944 complete : 0=0.0%, 4=90.1%, 8=9.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:52.944 issued rwts: total=80904,84106,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:52.944 latency : target=0, window=0, percentile=100.00%, depth=8 00:21:52.944 00:21:52.944 Run status group 0 (all jobs): 00:21:52.944 READ: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=316MiB (331MB), run=10001-10001msec 00:21:52.944 WRITE: bw=33.2MiB/s (34.8MB/s), 33.2MiB/s-33.2MiB/s (34.8MB/s-34.8MB/s), io=329MiB (344MB), run=9889-9889msec 00:21:53.203 ----------------------------------------------------- 00:21:53.203 Suppressions used: 00:21:53.203 count bytes template 00:21:53.203 1 7 /usr/src/fio/parse.c 00:21:53.203 677 64992 /usr/src/fio/iolog.c 00:21:53.203 1 8 libtcmalloc_minimal.so 00:21:53.203 1 904 libcrypto.so 00:21:53.203 ----------------------------------------------------- 00:21:53.203 00:21:53.203 00:21:53.203 real 0m12.923s 00:21:53.203 user 0m13.234s 00:21:53.203 sys 0m0.809s 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:21:53.203 ************************************ 00:21:53.203 END TEST bdev_fio_rw_verify 00:21:53.203 ************************************ 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- 
bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type= 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context= 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']' 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']' 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1305 -- # cat 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']' 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']' 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "raid5f",' ' "aliases": [' ' "893c896a-01ef-4d57-9720-2d103551d296"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "893c896a-01ef-4d57-9720-2d103551d296",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' 
"w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "raid": {' ' "uuid": "893c896a-01ef-4d57-9720-2d103551d296",' ' "strip_size_kb": 2,' ' "state": "online",' ' "raid_level": "raid5f",' ' "superblock": false,' ' "num_base_bdevs": 3,' ' "num_base_bdevs_discovered": 3,' ' "num_base_bdevs_operational": 3,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc0",' ' "uuid": "2d3dc648-dd4d-4dca-a6af-c23bc8bd17e5",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc1",' ' "uuid": "57f5afd4-68e0-44e7-8ad4-8f504c09a1fb",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc2",' ' "uuid": "0bd718e0-b9cb-40b6-bd87-5cd1e1b1de49",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' 00:21:53.203 14:21:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:21:53.462 14:21:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]] 00:21:53.462 14:21:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:21:53.462 /home/vagrant/spdk_repo/spdk 00:21:53.462 14:21:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@361 -- # popd 00:21:53.462 14:21:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT 00:21:53.462 14:21:30 blockdev_raid5f.bdev_fio -- bdev/blockdev.sh@363 -- # return 0 
00:21:53.462 00:21:53.462 real 0m13.150s 00:21:53.462 user 0m13.348s 00:21:53.462 sys 0m0.903s 00:21:53.462 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:53.462 ************************************ 00:21:53.462 END TEST bdev_fio 00:21:53.462 ************************************ 00:21:53.462 14:21:30 blockdev_raid5f.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:21:53.462 14:21:30 blockdev_raid5f -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:53.462 14:21:30 blockdev_raid5f -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:53.462 14:21:30 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:21:53.462 14:21:30 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:53.462 14:21:30 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:21:53.462 ************************************ 00:21:53.462 START TEST bdev_verify 00:21:53.462 ************************************ 00:21:53.462 14:21:30 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:21:53.462 [2024-11-27 14:21:30.674213] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 
00:21:53.462 [2024-11-27 14:21:30.674431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid90909 ] 00:21:53.720 [2024-11-27 14:21:30.868484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:53.978 [2024-11-27 14:21:31.045881] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.978 [2024-11-27 14:21:31.045893] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:54.547 Running I/O for 5 seconds... 00:21:56.419 9036.00 IOPS, 35.30 MiB/s [2024-11-27T14:21:35.073Z] 9546.50 IOPS, 37.29 MiB/s [2024-11-27T14:21:36.009Z] 10658.33 IOPS, 41.63 MiB/s [2024-11-27T14:21:36.944Z] 11214.25 IOPS, 43.81 MiB/s [2024-11-27T14:21:36.944Z] 11501.00 IOPS, 44.93 MiB/s 00:21:59.666 Latency(us) 00:21:59.666 [2024-11-27T14:21:36.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.666 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:21:59.666 Verification LBA range: start 0x0 length 0x2000 00:21:59.666 raid5f : 5.02 5724.84 22.36 0.00 0.00 33681.92 269.96 33840.41 00:21:59.666 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:59.666 Verification LBA range: start 0x2000 length 0x2000 00:21:59.666 raid5f : 5.02 5761.53 22.51 0.00 0.00 33503.69 145.22 33602.09 00:21:59.666 [2024-11-27T14:21:36.944Z] =================================================================================================================== 00:21:59.666 [2024-11-27T14:21:36.944Z] Total : 11486.37 44.87 0.00 0.00 33592.47 145.22 33840.41 00:22:01.041 00:22:01.041 real 0m7.558s 00:22:01.041 user 0m13.780s 00:22:01.041 sys 0m0.385s 00:22:01.041 14:21:38 blockdev_raid5f.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:01.041 14:21:38 
blockdev_raid5f.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:22:01.041 ************************************ 00:22:01.041 END TEST bdev_verify 00:22:01.041 ************************************ 00:22:01.041 14:21:38 blockdev_raid5f -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:01.041 14:21:38 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:22:01.041 14:21:38 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:01.041 14:21:38 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:01.041 ************************************ 00:22:01.041 START TEST bdev_verify_big_io 00:22:01.041 ************************************ 00:22:01.041 14:21:38 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:22:01.041 [2024-11-27 14:21:38.278630] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:22:01.041 [2024-11-27 14:21:38.278881] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91002 ] 00:22:01.301 [2024-11-27 14:21:38.469104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:22:01.560 [2024-11-27 14:21:38.627674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:01.560 [2024-11-27 14:21:38.627679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:02.127 Running I/O for 5 seconds... 
00:22:03.996 506.00 IOPS, 31.62 MiB/s [2024-11-27T14:21:42.651Z] 632.00 IOPS, 39.50 MiB/s [2024-11-27T14:21:43.588Z] 676.00 IOPS, 42.25 MiB/s [2024-11-27T14:21:44.526Z] 681.25 IOPS, 42.58 MiB/s [2024-11-27T14:21:44.526Z] 697.60 IOPS, 43.60 MiB/s 00:22:07.248 Latency(us) 00:22:07.248 [2024-11-27T14:21:44.526Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:07.248 Job: raid5f (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:22:07.248 Verification LBA range: start 0x0 length 0x200 00:22:07.248 raid5f : 5.17 343.64 21.48 0.00 0.00 9244914.93 255.07 417523.90 00:22:07.248 Job: raid5f (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:22:07.248 Verification LBA range: start 0x200 length 0x200 00:22:07.248 raid5f : 5.16 344.77 21.55 0.00 0.00 9192117.49 281.13 419430.40 00:22:07.248 [2024-11-27T14:21:44.526Z] =================================================================================================================== 00:22:07.248 [2024-11-27T14:21:44.526Z] Total : 688.41 43.03 0.00 0.00 9218516.21 255.07 419430.40 00:22:08.644 00:22:08.644 real 0m7.552s 00:22:08.644 user 0m13.809s 00:22:08.644 sys 0m0.348s 00:22:08.644 14:21:45 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:08.644 14:21:45 blockdev_raid5f.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:22:08.644 ************************************ 00:22:08.644 END TEST bdev_verify_big_io 00:22:08.644 ************************************ 00:22:08.644 14:21:45 blockdev_raid5f -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:08.644 14:21:45 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:08.644 14:21:45 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:08.644 14:21:45 
blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:08.644 ************************************ 00:22:08.644 START TEST bdev_write_zeroes 00:22:08.644 ************************************ 00:22:08.644 14:21:45 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:08.644 [2024-11-27 14:21:45.892204] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:22:08.644 [2024-11-27 14:21:45.892415] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91103 ] 00:22:08.903 [2024-11-27 14:21:46.085859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.162 [2024-11-27 14:21:46.246062] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:09.730 Running I/O for 1 seconds... 
00:22:10.666 20223.00 IOPS, 79.00 MiB/s 00:22:10.666 Latency(us) 00:22:10.666 [2024-11-27T14:21:47.944Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:10.666 Job: raid5f (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:22:10.666 raid5f : 1.01 20191.79 78.87 0.00 0.00 6314.45 2040.55 9830.40 00:22:10.666 [2024-11-27T14:21:47.944Z] =================================================================================================================== 00:22:10.666 [2024-11-27T14:21:47.944Z] Total : 20191.79 78.87 0.00 0.00 6314.45 2040.55 9830.40 00:22:12.043 00:22:12.043 real 0m3.345s 00:22:12.043 user 0m2.899s 00:22:12.043 sys 0m0.312s 00:22:12.043 14:21:49 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.043 14:21:49 blockdev_raid5f.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 ************************************ 00:22:12.043 END TEST bdev_write_zeroes 00:22:12.043 ************************************ 00:22:12.043 14:21:49 blockdev_raid5f -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:12.043 14:21:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:12.043 14:21:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.043 14:21:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:12.043 ************************************ 00:22:12.043 START TEST bdev_json_nonenclosed 00:22:12.043 ************************************ 00:22:12.043 14:21:49 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:12.043 [2024-11-27 
14:21:49.289445] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:22:12.043 [2024-11-27 14:21:49.289666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91156 ] 00:22:12.342 [2024-11-27 14:21:49.475199] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:12.599 [2024-11-27 14:21:49.620275] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:12.599 [2024-11-27 14:21:49.620397] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:22:12.599 [2024-11-27 14:21:49.620437] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:12.599 [2024-11-27 14:21:49.620452] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:12.858 00:22:12.858 real 0m0.724s 00:22:12.858 user 0m0.479s 00:22:12.858 sys 0m0.139s 00:22:12.858 14:21:49 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:12.858 14:21:49 blockdev_raid5f.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:22:12.858 ************************************ 00:22:12.858 END TEST bdev_json_nonenclosed 00:22:12.858 ************************************ 00:22:12.858 14:21:49 blockdev_raid5f -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:12.858 14:21:49 blockdev_raid5f -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:22:12.858 14:21:49 blockdev_raid5f -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:12.858 14:21:49 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:12.858 
************************************ 00:22:12.858 START TEST bdev_json_nonarray 00:22:12.858 ************************************ 00:22:12.858 14:21:49 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:22:12.858 [2024-11-27 14:21:50.086440] Starting SPDK v25.01-pre git sha1 38b931b23 / DPDK 24.03.0 initialization... 00:22:12.858 [2024-11-27 14:21:50.086612] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid91186 ] 00:22:13.116 [2024-11-27 14:21:50.269147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:13.375 [2024-11-27 14:21:50.415921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:13.375 [2024-11-27 14:21:50.416057] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:22:13.375 [2024-11-27 14:21:50.416087] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:22:13.375 [2024-11-27 14:21:50.416116] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:22:13.634 00:22:13.634 real 0m0.729s 00:22:13.634 user 0m0.486s 00:22:13.634 sys 0m0.136s 00:22:13.634 14:21:50 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.634 14:21:50 blockdev_raid5f.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:22:13.634 ************************************ 00:22:13.634 END TEST bdev_json_nonarray 00:22:13.634 ************************************ 00:22:13.634 14:21:50 blockdev_raid5f -- bdev/blockdev.sh@824 -- # [[ raid5f == bdev ]] 00:22:13.634 14:21:50 blockdev_raid5f -- bdev/blockdev.sh@832 -- # [[ raid5f == gpt ]] 00:22:13.634 14:21:50 blockdev_raid5f -- bdev/blockdev.sh@836 -- # [[ raid5f == crypto_sw ]] 00:22:13.634 14:21:50 blockdev_raid5f -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:22:13.634 14:21:50 blockdev_raid5f -- bdev/blockdev.sh@849 -- # cleanup 00:22:13.634 14:21:50 blockdev_raid5f -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:22:13.634 14:21:50 blockdev_raid5f -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:22:13.634 14:21:50 blockdev_raid5f -- bdev/blockdev.sh@26 -- # [[ raid5f == rbd ]] 00:22:13.634 14:21:50 blockdev_raid5f -- bdev/blockdev.sh@30 -- # [[ raid5f == daos ]] 00:22:13.634 14:21:50 blockdev_raid5f -- bdev/blockdev.sh@34 -- # [[ raid5f = \g\p\t ]] 00:22:13.634 14:21:50 blockdev_raid5f -- bdev/blockdev.sh@40 -- # [[ raid5f == xnvme ]] 00:22:13.634 00:22:13.634 real 0m49.708s 00:22:13.634 user 1m7.812s 00:22:13.634 sys 0m5.388s 00:22:13.634 14:21:50 blockdev_raid5f -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.634 14:21:50 blockdev_raid5f -- common/autotest_common.sh@10 -- # set +x 00:22:13.634 
************************************ 00:22:13.634 END TEST blockdev_raid5f 00:22:13.634 ************************************ 00:22:13.634 14:21:50 -- spdk/autotest.sh@194 -- # uname -s 00:22:13.634 14:21:50 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:22:13.634 14:21:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:13.634 14:21:50 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:22:13.634 14:21:50 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@260 -- # timing_exit lib 00:22:13.634 14:21:50 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:13.634 14:21:50 -- common/autotest_common.sh@10 -- # set +x 00:22:13.634 14:21:50 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:22:13.634 14:21:50 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:22:13.634 14:21:50 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:22:13.634 14:21:50 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:22:13.634 14:21:50 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:22:13.634 14:21:50 -- spdk/autotest.sh@385 -- # trap - SIGINT 
SIGTERM EXIT 00:22:13.635 14:21:50 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:22:13.635 14:21:50 -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:13.635 14:21:50 -- common/autotest_common.sh@10 -- # set +x 00:22:13.635 14:21:50 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:22:13.635 14:21:50 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:22:13.635 14:21:50 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:22:13.635 14:21:50 -- common/autotest_common.sh@10 -- # set +x 00:22:15.010 INFO: APP EXITING 00:22:15.010 INFO: killing all VMs 00:22:15.010 INFO: killing vhost app 00:22:15.010 INFO: EXIT DONE 00:22:15.279 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:15.537 Waiting for block devices as requested 00:22:15.537 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:22:15.537 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:22:16.472 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:22:16.472 Cleaning 00:22:16.472 Removing: /var/run/dpdk/spdk0/config 00:22:16.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:22:16.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:22:16.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:22:16.472 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:22:16.472 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:22:16.472 Removing: /var/run/dpdk/spdk0/hugepage_info 00:22:16.472 Removing: /dev/shm/spdk_tgt_trace.pid56718 00:22:16.472 Removing: /var/run/dpdk/spdk0 00:22:16.472 Removing: /var/run/dpdk/spdk_pid56484 00:22:16.472 Removing: /var/run/dpdk/spdk_pid56718 00:22:16.472 Removing: /var/run/dpdk/spdk_pid56948 00:22:16.472 Removing: /var/run/dpdk/spdk_pid57052 00:22:16.472 Removing: /var/run/dpdk/spdk_pid57103 00:22:16.472 Removing: /var/run/dpdk/spdk_pid57236 00:22:16.472 Removing: /var/run/dpdk/spdk_pid57261 
00:22:16.472 Removing: /var/run/dpdk/spdk_pid57471 00:22:16.472 Removing: /var/run/dpdk/spdk_pid57577 00:22:16.472 Removing: /var/run/dpdk/spdk_pid57684 00:22:16.472 Removing: /var/run/dpdk/spdk_pid57806 00:22:16.472 Removing: /var/run/dpdk/spdk_pid57914 00:22:16.472 Removing: /var/run/dpdk/spdk_pid57959 00:22:16.472 Removing: /var/run/dpdk/spdk_pid57996 00:22:16.472 Removing: /var/run/dpdk/spdk_pid58066 00:22:16.472 Removing: /var/run/dpdk/spdk_pid58178 00:22:16.472 Removing: /var/run/dpdk/spdk_pid58658 00:22:16.472 Removing: /var/run/dpdk/spdk_pid58733 00:22:16.472 Removing: /var/run/dpdk/spdk_pid58807 00:22:16.472 Removing: /var/run/dpdk/spdk_pid58823 00:22:16.472 Removing: /var/run/dpdk/spdk_pid58980 00:22:16.473 Removing: /var/run/dpdk/spdk_pid58996 00:22:16.473 Removing: /var/run/dpdk/spdk_pid59146 00:22:16.473 Removing: /var/run/dpdk/spdk_pid59173 00:22:16.473 Removing: /var/run/dpdk/spdk_pid59237 00:22:16.473 Removing: /var/run/dpdk/spdk_pid59255 00:22:16.473 Removing: /var/run/dpdk/spdk_pid59319 00:22:16.473 Removing: /var/run/dpdk/spdk_pid59348 00:22:16.473 Removing: /var/run/dpdk/spdk_pid59543 00:22:16.473 Removing: /var/run/dpdk/spdk_pid59580 00:22:16.473 Removing: /var/run/dpdk/spdk_pid59663 00:22:16.473 Removing: /var/run/dpdk/spdk_pid61047 00:22:16.473 Removing: /var/run/dpdk/spdk_pid61258 00:22:16.473 Removing: /var/run/dpdk/spdk_pid61404 00:22:16.473 Removing: /var/run/dpdk/spdk_pid62064 00:22:16.473 Removing: /var/run/dpdk/spdk_pid62282 00:22:16.473 Removing: /var/run/dpdk/spdk_pid62422 00:22:16.473 Removing: /var/run/dpdk/spdk_pid63071 00:22:16.473 Removing: /var/run/dpdk/spdk_pid63412 00:22:16.473 Removing: /var/run/dpdk/spdk_pid63552 00:22:16.473 Removing: /var/run/dpdk/spdk_pid64975 00:22:16.473 Removing: /var/run/dpdk/spdk_pid65229 00:22:16.473 Removing: /var/run/dpdk/spdk_pid65380 00:22:16.473 Removing: /var/run/dpdk/spdk_pid66798 00:22:16.473 Removing: /var/run/dpdk/spdk_pid67058 00:22:16.473 Removing: /var/run/dpdk/spdk_pid67204 
00:22:16.473 Removing: /var/run/dpdk/spdk_pid68617 00:22:16.473 Removing: /var/run/dpdk/spdk_pid69068 00:22:16.473 Removing: /var/run/dpdk/spdk_pid69219 00:22:16.473 Removing: /var/run/dpdk/spdk_pid70733 00:22:16.473 Removing: /var/run/dpdk/spdk_pid70999 00:22:16.473 Removing: /var/run/dpdk/spdk_pid71148 00:22:16.473 Removing: /var/run/dpdk/spdk_pid72661 00:22:16.473 Removing: /var/run/dpdk/spdk_pid72932 00:22:16.473 Removing: /var/run/dpdk/spdk_pid73078 00:22:16.473 Removing: /var/run/dpdk/spdk_pid74590 00:22:16.473 Removing: /var/run/dpdk/spdk_pid75090 00:22:16.473 Removing: /var/run/dpdk/spdk_pid75230 00:22:16.473 Removing: /var/run/dpdk/spdk_pid75379 00:22:16.473 Removing: /var/run/dpdk/spdk_pid75836 00:22:16.473 Removing: /var/run/dpdk/spdk_pid76600 00:22:16.473 Removing: /var/run/dpdk/spdk_pid76986 00:22:16.473 Removing: /var/run/dpdk/spdk_pid77689 00:22:16.473 Removing: /var/run/dpdk/spdk_pid78176 00:22:16.473 Removing: /var/run/dpdk/spdk_pid78974 00:22:16.473 Removing: /var/run/dpdk/spdk_pid79394 00:22:16.473 Removing: /var/run/dpdk/spdk_pid81396 00:22:16.473 Removing: /var/run/dpdk/spdk_pid81851 00:22:16.473 Removing: /var/run/dpdk/spdk_pid82296 00:22:16.473 Removing: /var/run/dpdk/spdk_pid84424 00:22:16.473 Removing: /var/run/dpdk/spdk_pid84915 00:22:16.731 Removing: /var/run/dpdk/spdk_pid85420 00:22:16.731 Removing: /var/run/dpdk/spdk_pid86502 00:22:16.731 Removing: /var/run/dpdk/spdk_pid86836 00:22:16.731 Removing: /var/run/dpdk/spdk_pid87788 00:22:16.731 Removing: /var/run/dpdk/spdk_pid88120 00:22:16.731 Removing: /var/run/dpdk/spdk_pid89077 00:22:16.731 Removing: /var/run/dpdk/spdk_pid89406 00:22:16.731 Removing: /var/run/dpdk/spdk_pid90092 00:22:16.731 Removing: /var/run/dpdk/spdk_pid90367 00:22:16.731 Removing: /var/run/dpdk/spdk_pid90434 00:22:16.731 Removing: /var/run/dpdk/spdk_pid90481 00:22:16.731 Removing: /var/run/dpdk/spdk_pid90734 00:22:16.731 Removing: /var/run/dpdk/spdk_pid90909 00:22:16.731 Removing: /var/run/dpdk/spdk_pid91002 
00:22:16.731 Removing: /var/run/dpdk/spdk_pid91103 00:22:16.731 Removing: /var/run/dpdk/spdk_pid91156 00:22:16.731 Removing: /var/run/dpdk/spdk_pid91186 00:22:16.731 Clean 00:22:16.731 14:21:53 -- common/autotest_common.sh@1453 -- # return 0 00:22:16.731 14:21:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:22:16.731 14:21:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.731 14:21:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.732 14:21:53 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:22:16.732 14:21:53 -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:16.732 14:21:53 -- common/autotest_common.sh@10 -- # set +x 00:22:16.732 14:21:53 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:22:16.732 14:21:53 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:22:16.732 14:21:53 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:22:16.732 14:21:53 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:22:16.732 14:21:53 -- spdk/autotest.sh@398 -- # hostname 00:22:16.732 14:21:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:22:16.990 geninfo: WARNING: invalid characters removed from testname! 
00:22:49.087 14:22:20 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:49.087 14:22:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:50.466 14:22:27 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:53.056 14:22:30 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:56.347 14:22:32 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o 
/home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:22:58.880 14:22:35 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:23:01.414 14:22:38 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:23:01.414 14:22:38 -- spdk/autorun.sh@1 -- $ timing_finish 00:23:01.414 14:22:38 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:23:01.414 14:22:38 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:23:01.414 14:22:38 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:23:01.414 14:22:38 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:23:01.414 + [[ -n 5213 ]] 00:23:01.414 + sudo kill 5213 00:23:01.423 [Pipeline] } 00:23:01.441 [Pipeline] // timeout 00:23:01.449 [Pipeline] } 00:23:01.466 [Pipeline] // stage 00:23:01.472 [Pipeline] } 00:23:01.486 [Pipeline] // catchError 00:23:01.497 [Pipeline] stage 00:23:01.499 [Pipeline] { (Stop VM) 00:23:01.512 [Pipeline] sh 00:23:01.792 + vagrant halt 00:23:05.978 ==> default: Halting domain... 00:23:11.276 [Pipeline] sh 00:23:11.556 + vagrant destroy -f 00:23:14.879 ==> default: Removing domain... 
00:23:15.152 [Pipeline] sh 00:23:15.434 + mv output /var/jenkins/workspace/raid-vg-autotest/output 00:23:15.443 [Pipeline] } 00:23:15.461 [Pipeline] // stage 00:23:15.467 [Pipeline] } 00:23:15.482 [Pipeline] // dir 00:23:15.490 [Pipeline] } 00:23:15.507 [Pipeline] // wrap 00:23:15.515 [Pipeline] } 00:23:15.528 [Pipeline] // catchError 00:23:15.538 [Pipeline] stage 00:23:15.540 [Pipeline] { (Epilogue) 00:23:15.554 [Pipeline] sh 00:23:15.835 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:23:22.413 [Pipeline] catchError 00:23:22.416 [Pipeline] { 00:23:22.430 [Pipeline] sh 00:23:22.713 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:23:22.713 Artifacts sizes are good 00:23:22.722 [Pipeline] } 00:23:22.739 [Pipeline] // catchError 00:23:22.753 [Pipeline] archiveArtifacts 00:23:22.761 Archiving artifacts 00:23:22.906 [Pipeline] cleanWs 00:23:22.942 [WS-CLEANUP] Deleting project workspace... 00:23:22.942 [WS-CLEANUP] Deferred wipeout is used... 00:23:22.950 [WS-CLEANUP] done 00:23:22.952 [Pipeline] } 00:23:22.971 [Pipeline] // stage 00:23:22.979 [Pipeline] } 00:23:22.995 [Pipeline] // node 00:23:23.002 [Pipeline] End of Pipeline 00:23:23.059 Finished: SUCCESS